Russia uses OpenAI’s AI for disinformation against Ukraine – FT

In the Doppelganger and Bad Grammar operations, Russian actors sought to undermine Ukraine's support from its partners by generating content with neural networks.

Entities affiliated with Russia, China, Iran, and Israel are using artificial intelligence tools from OpenAI to create and spread disinformation, in particular about the war in Ukraine, the Financial Times reports.

According to the OpenAI report, its artificial intelligence models were used in five covert influence operations. The content covered the war in Ukraine, the conflict in the Gaza Strip, the elections in India, and politics in Europe, the United States, and China. The operators also used AI to boost their productivity, including debugging code and researching social media activity.

Russia's Operation Doppelganger aimed to undermine support for Ukraine, while Operation Bad Grammar used OpenAI models to debug code for a Telegram bot and to generate short comments on political topics for distribution on Telegram.

The Chinese network Spamouflage used OpenAI tools to generate text and comments on the social network X, promoting Beijing's interests abroad. The Israeli company STOIC used AI to produce articles and comments on Instagram, Facebook, and X.

OpenAI's policies prohibit the use of its models to deceive or mislead. The company said it is working to detect disinformation campaigns and to build tools to combat them. OpenAI representatives noted that they have already made the attackers' work harder and that their models have repeatedly refused to generate the requested content.

Ben Nimmo, principal investigator on OpenAI's Intelligence and Investigations team, emphasized that the use of OpenAI models in disinformation campaigns has grown only "slightly," but warned that history shows influence operations can suddenly intensify if no one is looking for them.
