The AI-powered disinformation machine, known as "CounterCloud," is built on the chatbot ChatGPT. Nea Paw noted that the project was meant to explore the world of online disinformation and influence campaigns, and to show how easy it is to generate "opposing" articles with machine learning systems like ChatGPT using simple prompts.
Given those prompts, CounterCloud produces different versions of the same article, each with a distinct style and viewpoint, effectively creating fake stories and sowing doubt about the accuracy of the original content.
CounterCloud can also create fake journalists with complete identities, including names, personal details and AI-generated profile pictures. The machine can likewise produce audio clips of newsreaders reading article summaries. The system can even customize the tone, style and structure of all the content it creates to sound "more human-like" and less obviously AI-generated.
Moreover, CounterCloud was programmed to engage with social media by liking and reposting messages aligned with its narrative, as well as crafting "opposing" tweets in response to dissenting viewpoints.
"I wanted to see AI disinformation work in the real world. The strong language competencies of large language models are perfectly suited to reading and writing fake news articles," Nea Paw revealed.
As part of the testing process, CounterCloud measured its effectiveness against propaganda pieces released by the Russian state-backed media outlet Sputnik International, as well as anti-U.S. articles published by Russian embassies and Chinese news outlets.
From April to May 2023, CounterCloud was used to actively counter tweets and posts made against the United States. Each time Russian and Chinese news outlets posted content, CounterCloud would respond with AI-generated rebuttals supported by other news articles or opinion pieces.
The project successfully demonstrated the power of Nea Paw's system and the strength of relying entirely on AI-generated content, from tweets to articles and statements attributed to supposedly real journalists and news outlets. Nea Paw suggested that the content CounterCloud generated was convincing 90 percent of the time.
Renee DiResta, technical research manager at the Stanford Internet Observatory, backed Nea Paw's claim, noting that the articles and journalist profiles generated by CounterCloud were fairly convincing. She predicted that state-backed social media agencies and trolls would adopt such tools for their disinformation campaigns.
Moreover, Micah Musser, a researcher who studies AI-generated disinformation, believes that 2024 presidential election campaigns will increasingly turn to language models to generate promotional content, fundraising emails and attack ads. Though AI-generated text remains generic and often easy to identify, a human touch can polish it, putting effective disinformation campaigns within easy reach of the public.
According to Nea Paw, the project cost only around $400, underscoring how cheaply AI tools can mass-produce propaganda.
However, Nea Paw has not yet deployed CounterCloud on the open internet, fearing the uncontrollable consequences of disseminating disinformation. "Once the genie is out on the internet, there is no knowing where it would end up," Nea Paw stated.
And though Nea Paw believes in educating the public about how such systems work from the inside and equipping browsers with AI-detection tools, these solutions are not foolproof. "I don't think there is a silver bullet for this, much in the same way there is no silver bullet for phishing attacks, spam or social engineering," said Nea Paw.