Foreign influence campaigns, or information operations, have been widespread in the run-up to the 2024 U.S. presidential election. Influence campaigns are large-scale efforts to shift public opinion, push false narratives or change behaviors among a target population. Russia, China, Iran, Israel and other nations have run these campaigns by exploiting social bots, influencers, media companies and generative AI.

[…]

[Influence campaigns include] what researchers call coordinated inauthentic behavior. [They] identify clusters of social media accounts that post in a synchronized fashion, amplify the same groups of users, share identical sets of links, images or hashtags, or perform suspiciously similar sequences of actions.
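One of the coordination signals described above, near-identical sets of shared hashtags, can be sketched as a pairwise similarity check. The account names and the 0.8 threshold below are illustrative assumptions, not values from the research:

```python
# Toy sketch: flag account pairs whose hashtag sets are suspiciously similar.
# Real detection pipelines use many more signals (timing, links, retweet
# targets); this shows only the set-overlap idea.
from itertools import combinations


def jaccard(a: set, b: set) -> float:
    """Jaccard similarity: |intersection| / |union| (0.0 for two empty sets)."""
    return len(a & b) / len(a | b) if a | b else 0.0


def flag_coordinated_pairs(account_hashtags: dict, threshold: float = 0.8):
    """Return account pairs whose hashtag overlap meets the threshold."""
    flagged = []
    for (u, tags_u), (v, tags_v) in combinations(account_hashtags.items(), 2):
        if jaccard(tags_u, tags_v) >= threshold:
            flagged.append((u, v))
    return flagged


# Hypothetical accounts for illustration.
accounts = {
    "acct_a": {"#vote", "#freedom", "#truth", "#wakeup"},
    "acct_b": {"#vote", "#freedom", "#truth", "#wakeup"},  # near-identical set
    "acct_c": {"#cats", "#coffee"},                        # unrelated account
}
print(flag_coordinated_pairs(accounts))  # [('acct_a', 'acct_b')]
```

In practice such pairwise flags would feed a clustering step, so that whole networks of accounts sharing content in lockstep surface together rather than one pair at a time.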

[…]

[Researchers] have uncovered many examples of coordinated inauthentic behavior. For example, we found accounts that flood the network with tens or hundreds of thousands of posts in a single day. The same campaign can post a message with one account and then have other accounts that its organizers also control “like” and “unlike” it hundreds of times in a short time span. Once the campaign achieves its objective, all these messages can be deleted to evade detection. Using these tricks, foreign governments and their agents can manipulate the social media algorithms that decide what users see in their feeds, by gaming the signals for what is trending and what is engaging.
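The volume signal above, accounts posting far beyond what a human plausibly could in one day, lends itself to a very simple first-pass filter. The event log, field layout and the 1,000-posts-per-day cutoff here are assumptions for illustration:

```python
# Toy sketch: flag accounts whose single-day post count exceeds a cutoff.
# A production system would also track like/unlike churn and deletions,
# which this minimal example omits.
from collections import Counter


def flag_flooders(events, daily_cutoff: int = 1000):
    """events: iterable of (account, day) tuples.
    Returns accounts whose post count on any single day exceeds the cutoff."""
    per_day = Counter(events)  # (account, day) -> number of posts
    return sorted({acct for (acct, _day), n in per_day.items() if n > daily_cutoff})


# Hypothetical log: one account posting 50,000 times in a day, one human.
log = [("bot_7", "2024-10-01")] * 50_000 + [("human_1", "2024-10-01")] * 12
print(flag_flooders(log))  # ['bot_7']
```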

[…]

One increasingly common technique is creating and managing armies of fake accounts with generative artificial intelligence. [Researchers] estimate that at least 10,000 such accounts were active daily on X, and that was before X CEO Elon Musk dramatically cut the platform’s trust and safety teams. We also identified a network of 1,140 bots that used ChatGPT to generate humanlike content to promote fake news websites and cryptocurrency scams.

In addition to posting machine-generated content, harmful comments and stolen images, these bots engaged with each other and with humans through replies and retweets.

[…]

These insights suggest that social media platforms should engage in more – not less – content moderation to identify and hinder manipulation campaigns and thereby increase their users’ resilience to the campaigns.

The platforms can do this by making it more difficult for malicious agents to create fake accounts and to post automatically. They can also challenge accounts that post at very high rates to prove that they are human. They can add friction in combination with educational efforts, such as nudging users to reshare accurate information. And they can educate users about their vulnerability to deceptive AI-generated content.
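The suggestion above, challenging accounts that post at very high rates, amounts to a sliding-window rate check that triggers a human-verification step. The window size and post limit below are illustrative assumptions, not values proposed in the article:

```python
# Toy sketch: a sliding-window rate check that flags an account for a
# "prove you are human" challenge once it posts faster than a set limit.
from collections import deque


class PostRateChecker:
    def __init__(self, max_posts: int, window_seconds: float):
        self.max_posts = max_posts
        self.window = window_seconds
        self.timestamps = deque()  # post times within the current window

    def should_challenge(self, now: float) -> bool:
        """Record a post at time `now`; return True if the account has
        exceeded max_posts within the last window_seconds."""
        self.timestamps.append(now)
        # Drop posts that have aged out of the window.
        while self.timestamps and now - self.timestamps[0] > self.window:
            self.timestamps.popleft()
        return len(self.timestamps) > self.max_posts


# Hypothetical limit: more than 5 posts per minute triggers a challenge.
checker = PostRateChecker(max_posts=5, window_seconds=60)
results = [checker.should_challenge(t) for t in range(10)]  # one post per second
print(results)  # first five posts pass, the rest trigger a challenge
```

The point of such friction is not to block posting outright but to make automated, high-volume behavior expensive while leaving ordinary users unaffected.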

[…]

These types of content moderation would protect, rather than censor, free speech in the modern public squares. The right of free speech is not a right of exposure, and since people’s attention is limited, influence operations can be, in effect, a form of censorship by making authentic voices and opinions less visible.

  • alyaza [they/she]@beehaw.orgM · 1 month ago

    (Mostly rhetorical questions, I just strongly believe that you have an incorrect analysis of this situation and what must be done to change it and am hoping to provide other perspectives because you are not getting it…)

    your analysis of the situation is “kamala harris is promising a fascist dictatorship as well […] She is also promising to purge us.” which is, respectfully, a Charlie Brown had hoes level statement. it can be dismissed with prejudice because it’s so obviously false.

    • SinAdjetivos@beehaw.org · edited · 1 month ago

      You are engaging with a different comment thread, if you would like to engage with that thread and not be dismissive and condescending then go over there. (Lol, seriously why did you feel it necessary to post a screenshot of the og ‘Charlie Brown had hoes’ tweet?! 🤣)

      Would you like to take a pass at answering the above rhetorical questions?