Social Science Research Council, Research AMP, Just Tech

Measuring Changes Caused by Generative Artificial Intelligence: Setting the Foundations

Authors: Lai, Samantha; Nimmo, Ben; Ruths, Derek; Wanless, Alicia
Year: 2025


In 2024’s so-called year of elections, fears abounded over how generative artificial intelligence (GenAI) would affect voting around the world.1 However, as with other game-changing technologies throughout history, the sociopolitical risks of GenAI extend far beyond direct threats to democracy. As GenAI is leveraged to power “intelligent” products, made available for public use, adopted into routine business and personal activities, and used to refactor entire government and industry workflows, these disruptions carry significant potential for negative consequences as well as positive ones.

These consequences will be hard to identify for two reasons. First, GenAI is being integrated into already complex processes; when the outputs of such processes change, it can be hard to trace those changes back to their root causes. Second, most processes, whether in industry, government, or our personal lives, are not sufficiently well understood to allow detection of changes, especially those that are just emerging. Informed policy that leads to beneficial change is extremely challenging to develop without the ability to measure the material impacts of GenAI on governance, social services, criminal activity, health services, and myriad other aspects of social, political, and personal life. Measurement is necessary to identify which negative consequences warrant prioritization and to determine whether claimed threats are over-hyped or under-recognized. Without it, we may fail to target policies at the issues that need the most attention. Worse, we risk making changes that yield worse outcomes than the status quo.

This is the central problem we consider here: while democracies should be concerned about the potential impact of new technologies introduced into the information environment, how can those changes, good or bad, be measured?
The Information Environment Project at the Carnegie Endowment for International Peace convened a workshop to explore this question in the context of common fears about the use of GenAI. The workshop was held in person with sixteen participants,2 comprising investigators tracking GenAI abuses and researchers with experience measuring change, and was informed by a prior literature review identifying pre-existing knowledge gaps and points of debate. Through the course of the workshop, we identified four foundational questions whose answers will underpin any serious scientific attempt to measure the changes wrought by GenAI in a given information ecosystem: (1) What detection methods can reliably indicate that a piece of content is AI-generated? (2) What is the baseline against which change in the ecosystem will be measured? (3) Is the information ecosystem under observation complex and sprawling, with numerous variables and multiple interacting sub-systems, or more controlled, with fewer inputs and variables that a third party can readily observe? (4) Besides the new technology, what other factors influence the system, and how can they be accounted for?