Microtargeting, Automation, and Forgery: Disinformation in the Age of Artificial Intelligence

Arsenault, Amelia
University of Ottawa Research

In recent years, analysis of the contemporary security environment has become saturated with discussions of the threat posed by disinformation, defined as a systematic and deliberate effort to undermine the political and social structures of one’s adversaries through the dissemination of propaganda, misleading exaggerations, and falsehoods. From the advent of the printing press to the contemporary technologies of the Internet and social media, the media through which the citizenry engage in political debate and receive political news are embedded with structural biases that shape how citizens understand the informational landscape. Consequently, the development of communications technologies has also transformed the forms, range, and efficiency of disinformation campaigns. Recently, advances in the field of artificial intelligence (AI) have garnered the attention of international relations scholars. While tactical AI, capable of amassing, analyzing, and learning from extensive amounts of data, has been heralded for the critical advantages it offers to actors capable of employing it effectively, this emerging technology also provides salient opportunities for disinformation efforts. This paper asks: how does the advent of AI transform the scale and efficiency of disinformation campaigns targeted against democratic states?

The proliferation of AI introduces three critical transformations that exacerbate the scope, scale, and efficiency of contemporary disinformation campaigns. First, AI that uses advanced algorithms and social media data to precisely target segments of the electorate provides adversarial actors with a tool for the microtargeted exploitation of pre-existing political fissures and biases. Second, AI has enabled the automation of political propaganda, as exemplified by the deployment of botnets in the lead-up to elections. Third, AI’s machine learning and neural network capabilities allow for the production of convincing, seemingly authentic synthetic propaganda.

This paper concludes with an analysis of the unique challenges that liberal democracies face in confronting the threat posed by disinformation in the age of AI. Policy responses must ensure that they do not inadvertently bolster the very narratives they seek to disprove: efforts to regulate speech, ‘debunk’ falsehoods, or adopt technological countermeasures risk strengthening those narratives that seek to undermine key liberal democratic values. Policy responses must also recognize that AI-facilitated disinformation campaigns are precision-targeted and designed to resonate with pre-existing inclinations, biases, and beliefs; policy must therefore address the underlying domestic contentions and fissures that adversarial actors exploit. Finally, policy responses must avoid characterizing individuals who believe in conspiracies and falsehoods as ignorant, populist ‘dupes’, as such denigrating narratives may confirm anti-elitist suspicions and drive these individuals towards the very narratives that counter-disinformation efforts aim to address. As AI continues to proliferate globally, liberal democratic states face distinct challenges in addressing disinformation that exploits this emerging technology; policy responses must respect key liberal democratic values and account for the pre-existing political and social conditions that allow disinformation to flourish and erode liberal democratic institutions and processes, without inadvertently bolstering the narratives they seek to counter.