Social Science Research Council Research AMP Just Tech
Citation

Reducing political polarization through conversations with artificial intelligence

Author:
Hruschka, Timon M. J.; Appel, Markus
Publication:
Journal of Computer-Mediated Communication
Year:
2026

Political polarization is threatening the welfare of individuals and societies. Connecting insights from interpersonal communication to human–machine communication, we hypothesized that positive interactions with artificial intelligence (AI) could reduce polarization between humans. To evaluate this proposition, we conducted two experiments in which human participants (N = 1,035) communicated with AI chatbots in real time. The bots engaged in different communication styles while opposing the participants' most polarized political views. Across both experiments, engaging with a counterarguing AI chatbot led to significant issue depolarization. AI chatbots exhibiting high (vs. low) conversational receptiveness and active listening produced stronger affective depolarization toward humans, higher participant intellectual humility, and a greater willingness to engage in future conversations with holders of opposing opinions, AI and humans alike. Our experiments show that large language models are powerful tools for individual depolarization and the promotion of beneficial cognitive processing skills.

Extreme political views and negative feelings toward others who disagree politically have become a threat to democracies. This study tested whether brief conversations with an AI chatbot could reduce extreme views and make people more open to understanding the other political side. In two online experiments, 1,035 U.S. adults chatted live with a chatbot about one of four strongly polarized political topics, such as gun regulation or U.S. aid to Ukraine. The bot was programmed to communicate in different ways, and each participant chatted with only one version. One bot counterargued and was firm and direct in its argumentation. Another bot also counterargued, but showed more acceptance of different views and asked questions. A third bot talked about an unrelated, nonpolitical topic.
After the chat, participants who received counterarguments from the bot held less extreme views than those who had chatted about a nonpolitical topic. When the bot also accepted others' positions and asked questions, participants felt warmer toward people who disagreed with them, showed more recognition of possible limits in their own knowledge, and were more willing to engage in future conversations across the partisan divide.