Social Science Research Council Research AMP Just Tech
Citation

The (Un)desirable shield: consequences of perceived effects of warning labels on AI-generated political disinformation

Authors:
Wei, Ran; Pu, Jingyi; Lo, Ven-Hwei; Zhang, Xinzhi
Publication:
Information, Communication & Society
Year:
2026

As AI-generated political disinformation proliferates, warning labels have emerged as a defining regulatory intervention. Drawing on third-person effect (TPE) theory, this study investigates how exposure to warning labels on AI-generated disinformation shapes perceptual effects and behavioral consequences during the 2024 U.S. presidential election. A national online survey in the U.S. (N = 2,373) examined the impact of warning labels attached to AI-generated political disinformation targeting both the Democratic and Republican candidates. Results show that exposure to the warning labels significantly increased perceived effects on both oneself and the general public. These findings support the generalizability of TPE in politically charged environments and highlight its relevance to AI-generated disinformation. Regarding behavioral outcomes, perceived effects of warning labels on others predicted support for restrictive policies and engagement in preventive actions. In contrast, perceived effects on oneself drove individual-level preventive behaviors, such as enhancing AI literacy, but did not lead to greater support for regulatory action. In addition, the perceived social desirability of warning labels moderated these outcomes, particularly for anti-Republican disinformation, amplifying perceived influence among those who endorsed the intervention. These findings advance TPE scholarship by highlighting the complex interplay among perception, partisanship, and regulatory attitudes, offering insights for the governance of AI-mediated information environments and for the design of communication interventions that safeguard information integrity.