Social Science Research Council · Research AMP · Just Tech
Citation

The new political ad machine: Policy frameworks for political ads in an age of AI

Author:
Brennen, Scott Babwah; Perault, Matt
Year:
2024

As we approach the 2024 presidential election, policymakers, practitioners, and scholars are assessing the promise and pitfalls of generative artificial intelligence (GAI) in elections. While some practitioners have observed that GAI may help optimize and improve political ad production and targeting, there has been far more concern that GAI will lead to wide-scale disruption of political life. This brief examines the use of GAI in political ads to date, assesses the potential risks and benefits of its use, reviews what existing empirical research can teach us about those risks, and then uses those insights as the basis for a set of recommended policy interventions. Although the use of GAI in political ads has been limited thus far, we anticipate increased usage in the 2024 election cycle and beyond. Despite this limited use to date, a great deal of public commentary has speculated on the potential harm that GAI might bring to political advertising. Those concerns fall into four main categories:

1. Scale: GAI may facilitate an increase in the volume of deceptive content in political ads by lowering the cost and difficulty of producing manipulated content.

2. Authenticity: GAI may produce falsehoods that look more realistic or that appear to come from authentic sources.

3. Personalization: GAI may allow advertisers to better personalize targeted content to smaller audience segments, increasing the effectiveness of deceptive ads.

4. Bias: GAI may exacerbate bias and discrimination in political ads.

Policymakers have moved quickly to introduce proposals to address these concerns. Most proposals have focused on three interventions: watermarks on all GAI content, disclaimers on political ads containing GAI content, and bans on deceptive GAI content in political ads. While there is limited empirical research on GAI in political ads, our reading of the literature on online misinformation, political ads, and bias in AI models offers five important insights into the potential harm of GAI in political ads:

First, research suggests that the persuasive power of both political ads and online misinformation is often overstated. Political ads likely have more of an effect on behavior – such as voter turnout and fundraising – than on persuasion.

Second, political ads likely have the greatest impact in smaller, down-ballot races where there is less advertising, oversight, or familiarity with candidates.

Third, GAI content has the potential to replicate bias, including racial, gender, and national biases.

Fourth, research on political disclaimers suggests that watermarks and disclaimers are unlikely to significantly curb risks.

Fifth, significant holes in the research remain.

These insights from the literature help to formulate recommendations for policymakers that can mitigate the potential harm of GAI without unduly constraining its potential benefits. Research suggests that policy should focus more on preventing abuse in smaller, down-ballot races and on mitigating bias than on banning deceptive GAI content or requiring disclaimers or watermarks. Although the research points in this direction, holes in the literature remain. As a result, we should approach its insights from a position of curiosity rather than certainty, and conduct additional research into the impact of GAI on the electoral process. Building on our assessment of the academic literature, we offer ten recommendations for policymakers seeking to limit the potential risks of GAI in political ads. These recommendations fall into two categories: First, public policy should target electoral harms rather than technologies. Second, public policy should promote learning about GAI so that we can govern it more effectively over time.