Citation

Generative AI and Disinformation: Recent Advances, Challenges, and Opportunities

Author:
Bontcheva, Kalina; Papadopoulos, Symeon; Tsalakanidou, Filareti; Gallotti, Riccardo; Dutkiewicz, Lidia; Krack, Noémie; Teyssou, Denis; Nucci, Francesco Saverio; Spangenberg, Jochen; Srba, Ivan; Aichroth, Patrick; Cuccovillo, Luca; Verdoliva, Luisa
Year:
2024

Over the past three years, generative AI technology (e.g. DALL-E, ChatGPT) has made a sudden leap from research papers and company labs to online services used by hundreds of millions of people, including school children. In the United States alone, 18% of adults had used ChatGPT as of July 2023, according to Pew Research (Park & Gelles-Watnick, 2023). As the fluency and affordability of generative AI continue to increase from one month to the next, so does its wide-ranging misuse for the creation of cheap, highly convincing, large-scale disinformation campaigns.

Damaging examples of AI-generated disinformation abound, including lucrative Facebook ads seeking to influence voters through deepfake videos of Moldova’s pro-Western president (Gilbert, 2024). YouTube has also been found to host ads containing political deepfake videos that used voice imitation (RTL Lëtzebuerg, 2023). Beyond videos, AI-generated images have been used to spread disinformation about Gaza (France, 2023; Totth, 2023) and to propagate divisive, anti-immigrant narratives (The Journal, 2023). Audio deepfakes have also been reported by fact-checkers; so far, these have mostly involved fake conversations and statements by politicians (Demagog, 2023; Dobreva, 2023; Bossev, 2023). Russian disinformation campaigns have likewise weaponised generative AI (e.g. a deepfake video of the Ukrainian president calling for surrender (Kinsella, 2023) and an AI-generated conversation between the Ukrainian president and his wife (Demagog, 2023)). The countries being targeted span the entire European Union (and beyond), including highly susceptible countries such as Bulgaria (Bossev, 2023; BNT, 2023), where citizens have low levels of media literacy and critical thinking skills, as well as little awareness of the existence of sophisticated AI-generated images, videos, audio, and text.

Platform actions aimed at countering disinformation in posts and adverts have so far also fallen short of detecting and removing harmful AI-generated content. All major social media platforms and chat apps have been impacted. For brevity, here we include only some examples from Facebook (ads (Gilbert, 2024), groups (The Journal, 2023), pages (Bossev, 2023)), YouTube (RTL Lëtzebuerg, 2023), X (France, 2023; Totth, 2023), Instagram (France, 2023; Totth, 2023), TikTok (AFP USA & AFP Germany, 2023; Marinov, 2023) and Telegram (Starcevic, 2023; Marinov, 2023). AI-generated content (e.g. a fake audio recording claiming votes are being manipulated in Bulgaria (Dobreva, 2023)) is also being sent by email to media outlets and journalists, with the intention of duping reliable outlets into publishing fake content. Moreover, not only is generative AI used to create highly deceptive disinformation campaigns at low cost, but its very existence and proficiency are being weaponised by actors who propagate false claims that authentic images, videos, and audio content from governments and mainstream media are actually fake. One recent example is a court case against Tesla, in which the company’s lawyers claimed that a video of Elon Musk was a deepfake (The Guardian & Reuters, 2023). Another example is from Bulgaria, where bad actors seeking to discredit the government and the “neoliberal” mainstream media spread false claims through a pro-Kremlin Telegram channel and Facebook pages, labelling as fake an official photo of the Bulgarian prime minister speaking at the European Parliament.

What these examples demonstrate is that generative AI has had a disruptive, hyper-realistic effect on citizens’ ability to discern authentic from fabricated content, and on the platforms’ and fact-checkers’ abilities to tackle online disinformation. More specifically, from the perspective of verification professionals, the absence of a traceable origin completely disrupts their content verification workflows. Until generative AI became a cheap and prolific “author” of fake online content, journalists, fact-checkers, human rights defenders, and other professionals mainly relied on being able to trace a given object (text, image, video, or audio) back to its original source and thus verify whether the examined content was consistent with reality or whether, on the contrary, it was a decontextualised or manipulated copy.

Another particularly troubling consequence of the commodification of generative AI is its extremely low cost and easy accessibility through websites and mobile applications. There are now numerous online tutorials on YouTube, TikTok, and elsewhere on how to create AI-generated images or videos (including using AI avatars), either for free or for as little as tens or hundreds of dollars per month. In comparison, in 2016 the budget of Russia’s Internet Research Agency (IRA) was $1.25 million per month (Intelligence: Senate, c. 2019–2021).

The goal of this white paper is to deepen understanding of the disinformation-generation capabilities of state-of-the-art AI, as well as the use of AI in the development of new disinformation detection technologies, along with the associated ethical and legal challenges. We conclude by revisiting the challenges and opportunities brought by generative AI in the context of disinformation production, spread, detection, and debunking.