The Mega Impact of AI-Driven Disinformation Campaigns
The amount of money needed to run an AI-based disinformation campaign is minuscule compared to the influence the campaign can have on society. As I noted in my recent SecurityWeek piece “Preparing Society for AI-Based Disinformation Campaigns in the 2024 US Elections”, these efforts follow four common steps: reconnaissance, content creation, amplification and actualization.
Unlike most threat actors, who typically act out of financial motivation, the question here isn’t ‘how do we monetize the campaign?’ It’s ‘how do we effect change?’ These actors want to change the way people think and act, and change what they believe.
To understand just how effective these campaigns can be, one only needs to look at the campaigns launched before the use of AI and ask a few high-level questions. Take Brexit. Who had a vested interest in disrupting the EU? Who rejoiced more than anyone else when the United Kingdom left?
At the time, Russia-based threat actors almost certainly created social media campaigns manually to polarize UK society and sway the population into believing that their votes were critical to preserving the UK’s identity and regaining its sovereignty. Since at least May 2014, similar campaigns (e.g., Project Lakhta) have targeted the US electoral process, both to undermine faith in democratic institutions and for the personal gain of those running them.
For the 2024 US elections, we are already witnessing the application of AI to disinformation campaigns. The scope and scale of these campaigns will be greater than anything we’ve ever seen, and several factors make this so. First, US society is highly polarized. The political left and the political right accept the existence of so-called ‘alternative facts’ and conspiracy theories to the point where they are now mainstream. Not only do adults of every age adhere to and amplify them, but today’s children in the US are growing up to understand this situation as normal. It is perfectly fine, they are commonly told, to distrust what has been pejoratively labeled ‘the MSM’, aka the mainstream media.
While there are certainly mainstream media channels with a clear agenda, trustworthy, centrist MSM outlets still exist. Unfortunately, many people say, “It’s all conspiracy. They’re lying to you in the mainstream media.” This drives people away from the MSM and toward non-MSM information sources. The irony is that non-MSM sources operate under no strict reporting guidelines, yet the default position of the people who follow them is to believe everything those sources report and say.
Society’s Willingness to Believe in AI Deepfakes
We know that celebrities have proven highly successful over the years at swaying millions of US voters during elections. And it looks like AI-created celebrities may exert an outsized influence in the US in 2024 as well. Consider the online celebrity Miquela Sousa, also known as Lil Miquela or simply Miquela. ‘She’ is a fictional American character, singer and social media personality. Yet she was also named one of Time’s 25 Most Influential People on the Internet in 2018. Today, Miquela has over 3 million Instagram followers. She has done work for major brands like Chanel, Esquire and Prada, earning tens of millions of dollars.
What does Miquela prove for Election 2024 watchers and threat actors? She proves that American society is willing and ready to interact with and believe in fake people, even when society knows they are fake. Just recently, a man in Belgium committed suicide at the urging of an AI chatbot, which underscores how real the threat is and how dangerous AI can be.
What to Do About the New Reality
In the past, debunking was the main response to disinformation campaigns. But debunking often made the problem worse by unintentionally amplifying the disinformation being debunked, because debunking requires the ‘debunker’ to repeat the false information.
So now we are seeing the rise of ‘prebunking’. Prebunking involves informing the public (or specific target groups) that they will be targeted with specific disinformation in the future. A good example of effective prebunking occurred around Russia’s recent attacks on Ukraine. Ukrainian government officials had credible evidence of an operation Russia was planning, so the Ukrainian government made the planned actions public, and as a result they never took place.
To effectively prebunk disinformation during the 2024 election cycle, one must do all the groundwork necessary to get the prepositioning correct by making the public aware of six pieces of information (a minimal sketch for structuring such a brief follows the list):
- The current situation
- How someone is going to put out false information
- How those falsehoods are being prebunked
- The types of content you can expect to see
- The reason why bad actors would post this type of content
- The reaction the bad actors hope to get
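For teams operationalizing this checklist, a simple structured record can help ensure no piece of the brief is omitted before publication. Below is a minimal sketch in Python; the `PrebunkBrief` class and its field names are hypothetical illustrations, not an established framework:

```python
from dataclasses import dataclass, field

@dataclass
class PrebunkBrief:
    """Hypothetical record capturing the six pieces of information
    a prebunking effort should make public."""
    current_situation: str   # the factual baseline as it stands today
    delivery_method: str     # how the false information will be put out
    prebunk_summary: str     # how those falsehoods are being prebunked
    expected_content: list[str] = field(default_factory=list)  # content types to expect
    actor_motive: str = ""   # why bad actors would post this type of content
    desired_reaction: str = ""  # the reaction the bad actors hope to get

    def is_complete(self) -> bool:
        # Publish only when every one of the six pieces is filled in.
        return all([
            self.current_situation, self.delivery_method, self.prebunk_summary,
            self.expected_content, self.actor_motive, self.desired_reaction,
        ])
```

A communications team could then gate publication on `is_complete()` returning `True`, so no audience receives a brief missing one of the six pieces.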
Issuing this type of information prepares people psychologically to deal with the disinformation, particularly when they find themselves reacting to it in a certain way. They now have the clarity of thought to second-guess their feelings and the information that caused them. They can then temper their handling of the disinformation by asking themselves the following:
- Why am I sharing this?
- Who wants me to share this?
- What message is this sending?
In close elections, preparing even a small percentage of the electorate for anticipated disinformation campaigns can have a dramatic effect on preserving the integrity of election outcomes. Looking at you, swing states in the US.
Create Your Own Policies
The United States lacks government-run protections against online disinformation campaigns, which is evident in the large gap between EU and US protections. The EU’s Digital Services Act (DSA) regulates social networks and content-sharing platforms to prevent the spread of disinformation. It ensures user safety, protects fundamental rights and creates a fair and open online platform environment. By contrast, the US does little in this arena.
A recent article in The Nation finds “many US social media companies have actually gone backward on regulating misinformation over the past few years…companies like YouTube and Meta have slowed the labeling and removing of political misinformation and election denialism, and Meta now offers the ability to ‘opt out’ of fact-checking services.”
“They are almost gleefully embracing these calls for deregulation of content, content policing, content moderation that are being espoused by folks like Elon Musk,” says Samuel Woolley, assistant professor of journalism and project director for propaganda research at the Center for Media Engagement at University of Texas at Austin, in his interview with The Nation.
All of this leaves US citizens and organizations to fend off disinformation themselves. So, how would that look for a typical US business? If you feel that your business may be targeted with disinformation, you can move beyond the current paradigm of checking for indicators of compromise and expand your monitoring scope to include disinformation. Cross-functional teams can cover the most ground within a business. And since disinformation campaigns often target an entire industry, organizations within the same industry can share resources to identify and prebunk potential disinformation campaigns, or align in defense against an ongoing one.
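As one illustration of what expanding scope into disinformation might look like in practice, here is a minimal sketch that flags bursts of near-identical posts from many distinct accounts, a common signature of coordinated amplification. The `Post` record shape, thresholds and function name are assumptions for illustration, not a production detector:

```python
from collections import defaultdict
from datetime import datetime, timedelta

# Hypothetical post record: (account_id, text, timestamp).
Post = tuple[str, str, datetime]

def flag_coordinated_bursts(posts: list[Post],
                            window: timedelta = timedelta(minutes=10),
                            min_accounts: int = 20) -> list[str]:
    """Return post texts pushed by at least `min_accounts` distinct accounts
    within one time window -- a crude signal of coordinated amplification."""
    by_text: dict[str, list[Post]] = defaultdict(list)
    for post in posts:
        # Normalize whitespace and case so trivial edits don't break grouping.
        key = " ".join(post[1].lower().split())
        by_text[key].append(post)

    flagged: list[str] = []
    for text, group in by_text.items():
        group.sort(key=lambda p: p[2])
        # Slide over the sorted posts looking for a dense burst of accounts.
        for i in range(len(group)):
            in_window = [p for p in group[i:] if p[2] - group[i][2] <= window]
            if len({p[0] for p in in_window}) >= min_accounts:
                flagged.append(text)
                break
    return flagged
```

A cross-functional team could run such a check against posts mentioning the company’s brand or industry keywords, then hand flagged clusters to communications staff for prebunking.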
The Need for a Post-Trust Approach to Truth
Today, it is clear that we, as a society, need a post-trust approach to truth. Many mainstream media outlets are already taking such an approach via enhanced fact checking. They perform fact checking for readers and viewers, providing references as they do. This helps audiences know that the facts cited and reported are correct. For example, the BBC’s service, BBC Verify, has an “X” (formerly Twitter) account called BBC Reality Check. During election campaigning, it performs reality checks on what candidates say, such as during debates. For debates, it will publish an entire article that considers every fact and quote cited and clarifies the truth for the audience.
One might think of this approach to truth as Zero Trust for truth. While it reflects a cynical view of the world, it does give the truth something to stand on. In the end, we can only hope that the 2024 elections reflect a society that understands and votes based on its own priorities and hopes, not on disinformation.