Citation

Detecting pro-Kremlin disinformation using large language models

Authors:
Kramer, Marianne; Golovchenko, Yevgeniy; Hjorth, Frederik
Publication:
Research & Politics
Year:
2025

A growing body of literature examines manipulative information by detecting political mis- and disinformation in text data. This line of research typically involves highly costly manual annotation of text, whether for manual content analysis or for training and validating automated downstream approaches. We examine whether Large Language Models (LLMs) can detect pro-Kremlin disinformation about the war in Ukraine, focusing on the case of the downing of civilian flight MH17. We benchmark methods using a large set of tweets labeled by expert annotators. We show that both open and closed LLMs can accurately detect pro-Kremlin disinformation tweets, outperforming both a research assistant and supervised models used in earlier research, and at drastically lower cost than either research assistants or crowd workers. Our findings contribute to the literature on mis- and disinformation by showing how LLMs can substantially lower the cost of detection even when labeling requires complex, context-specific knowledge about a given case.
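As a rough illustration of the kind of workflow the abstract describes, the sketch below labels tweets with a zero-shot prompt and scores the output against expert annotations. The model name, prompt wording, and helper functions are illustrative assumptions, not the setup used in the paper.

```python
# Illustrative sketch: zero-shot LLM labeling of tweets as pro-Kremlin
# disinformation, benchmarked against expert labels. The prompt, model name,
# and label scheme are assumptions for illustration, not the paper's method.
from openai import OpenAI
from sklearn.metrics import accuracy_score, f1_score

client = OpenAI()  # reads OPENAI_API_KEY from the environment

PROMPT = (
    "You will see a tweet about the downing of flight MH17. "
    "Answer with exactly one word: 'disinformation' if the tweet promotes "
    "a pro-Kremlin disinformation narrative about MH17, otherwise 'other'."
)

def label_tweet(text: str, model: str = "gpt-4o-mini") -> int:
    """Return 1 if the LLM labels the tweet as pro-Kremlin disinformation."""
    response = client.chat.completions.create(
        model=model,
        temperature=0,
        messages=[
            {"role": "system", "content": PROMPT},
            {"role": "user", "content": text},
        ],
    )
    answer = response.choices[0].message.content.strip().lower()
    return 1 if answer.startswith("disinformation") else 0

def benchmark(tweets: list[str], expert_labels: list[int]) -> dict[str, float]:
    """Compare LLM labels with expert annotations on a labeled tweet set."""
    predictions = [label_tweet(t) for t in tweets]
    return {
        "accuracy": accuracy_score(expert_labels, predictions),
        "f1": f1_score(expert_labels, predictions),
    }
```

The same benchmarking loop could be pointed at an open-weights model by swapping the client for a local inference endpoint; the comparison against expert labels stays unchanged.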