
OpenAI Disrupts AI-Deployed Influence Operations

Low-Impact Disinformation Campaigns Based in Russia, China, Iran, Israel
OpenAI says it caught actors from China, Russia, Iran and Israel using its tools to create disinformation. (Image: Shutterstock)

OpenAI said it disrupted five covert influence operations, including some from China and Russia, that attempted to use its artificial intelligence services to manipulate public opinion amid elections.


The threat actors used AI models to generate short comments and longer articles in multiple languages, create names and bios for social media accounts, conduct open-source research, debug code, and translate and proofread texts, the company said.

The operations do not appear to have had much impact on audience engagement or the spread of manipulative messages, rating two on the Breakout Scale, a Brookings measure of the impact of influence operations. A score of two on the low-to-high scale, which tops out at six, means the manipulative content appeared on several platforms but did not have a breakout impact on its audience.

A recent report from the Alan Turing Institute on the electoral impact of AI-powered covert influence campaigns says that AI has had a limited effect on outcomes - but that it creates second-order risks such as polarization and the erosion of trust in online sources (see: UK Government Urged to Publish Guidance for Electoral AI).

The campaigns OpenAI discovered comprise two operations based in Russia, one in China, one in Iran and one run by a commercial company in Israel.

A Russian operation dubbed "Bad Grammar" used Telegram to target Ukraine, Moldova, the Baltic states and the United States. The other, called "Doppelganger," posted content about Ukraine.

The Chinese threat actor Spamouflage praised China and attacked its critics, and an Iranian operation called "Union of Virtual Media" praised Iran and condemned Israel and the United States. The Israel-based operation was run through the private company Stoic, which created content about the Gaza conflict and the Israeli trade union Histadrut.

OpenAI said it was able to identify the AI-supported influence operations because of the threat actors' lack of due diligence. Bad Grammar gave itself away when its operators published posts that still contained "refusal messages from our model, exposing their content as AI-generated."

For much of this record-breaking electoral year, experts have consistently warned about the potential for AI-driven disinformation. The U.S. federal government and industry players have attempted to get ahead of the threat with awareness campaigns and cross-industry partnerships.

Industry experts said they were surprised at how "weak and ineffective" the AI-led disinformation campaigns turned out to be. "We all expected bad actors to use LLMs to boost their covert influence campaigns. None of us expected the first exposed AI-powered disinformation attempts to be this weak and ineffective," said Thomas Rid, a professor at Johns Hopkins University's School of Advanced International Studies.


About the Author

Rashmi Ramesh

Assistant Editor, Global News Desk, ISMG

Ramesh has seven years of experience writing and editing stories on finance, enterprise and consumer technology, and diversity and inclusion. She has previously worked at formerly News Corp-owned TechCircle, business daily The Economic Times and The New Indian Express.



