At the end of May, OpenAI marked a new “first” in its corporate history. It wasn’t an even more powerful language model or a new data partnership, but a report disclosing that bad actors had misused its products to run influence operations. The company had caught five networks of covert propagandists, including players from Russia, China, Iran, and Israel, using its generative AI tools for deceptive tactics that ranged from generating large volumes of social media comments in multiple languages to turning news articles into Facebook posts. The use of these tools, OpenAI noted, appeared aimed at improving both the quality and the quantity of the networks’ output. AI gives propagandists a productivity boost too.
[…]
Source: Propagandists are using AI too | MIT Technology Review