Citation

Whose voice counts? The role of large language models in public commenting

Author:
Arsenault, Amelia C; Kreps, Sarah
Publication:
Big Data & Society
Year:
2026

The notice-and-comment period in US federal rulemaking fosters civic engagement but has long been dominated by well-resourced actors with specialized knowledge, creating an “accessibility gap.” Large language models (LLMs) may help bridge this divide by assisting citizens in understanding dense policy documents and drafting effective comments. However, these tools could also exacerbate existing disparities, as actors already well-represented in the rulemaking process may be better positioned to use LLMs effectively. We employ two empirical tests to assess whether LLMs bridge or reinforce these disparities. First, we conducted a survey experiment in which participants submitted mock comments on a policy proposal, with the treatment group using an LLM to assist their responses and the control group completing the task unaided. Participants across education levels reported that the LLM made comment-writing easier, suggesting a potential to expand accessibility. However, it did not improve participants’ self-reported policy comprehension, indicating that LLMs are not a solution to knowledge gaps. Second, reviewers evaluated the quality of comments written with and without LLM assistance. Comments written with LLM assistance consistently received higher ratings regardless of the commenter’s education level. While assistance did not disproportionately benefit any particular educational group, the quality improvement may be especially meaningful for less-educated citizens, helping their submissions reach the threshold of serious consideration by policymakers. LLMs could therefore serve as an entry point for those who have traditionally been underrepresented in the rulemaking process.