Social Science Research Council Research AMP Just Tech

Human–algorithm interactions in platform governance: how Chinese moderators balance conflicting logics in algorithm-assisted decision-making

Authors:
Zhao, Lu; Zhang, Ruichen
Publication:
Journal of Computer-Mediated Communication
Year:
2026

Content moderation entails human–algorithm collaborative decision-making on digital platforms subject to multiple socio-structural forces. This article applies the perspectives of technology affordances and institutional logics to unpack content moderation as a human–algorithm interaction process grounded in the dynamic network of platform governance. We collaborated with three Chinese “super platforms” delivering wide-ranging services and conducted fieldwork within them. Based on participant observation on these platforms and interviews with their staff members, we analyze how the state logic prioritizing socio-political security and the corporate logic stressing commercial interest are embedded in the moderation workflow, and we distill three affordances of algorithmic decision-making that affect how moderators balance these logics. We find that these affordances play a significant role in helping human moderators balance competing logics: they nudge moderators with divergent preferences toward comprehensive understandings of governance logics, fostering organizational coherence within the platform company, and they help moderators strategically integrate the conflicting logics in an iterative process for more accurate decision-making. Moving beyond the human/algorithm divide in existing research, which stresses the differences and contradictions between algorithmic decision-making and human judgment, our study sheds light on the positive potential of human–algorithm interactions in addressing the shifting requirements of platform governance.

Digital platforms generally monitor and regulate their online content to meet government policies and their own business goals. These requirements are often contradictory: the government prioritizes socio-political security, whereas commercial organizations value flexibility and efficiency.
Therefore, it is difficult for individual moderators to decide whether a post should be blocked to avoid punishment from government supervisors or kept as it is to preserve traffic and commercial interest. Our study examines how moderators deal with these contradictions with the help of algorithms that automatically screen and filter harmful content. We find that algorithms help moderators develop balanced understandings of both government policies and commercial pursuits. Furthermore, algorithms are trained to be deliberately rigid in initial screening to satisfy government requirements, and their decisions are passed on to humans for more careful and flexible judgments that protect the platform’s commercial interests. Human corrections then feed back into refinements of the algorithmic models and manual guidelines, yielding more accurate and reasonable decision-making. In this way, algorithms can unite moderators holding different values around a common goal and, more importantly, help moderators develop integrative strategies to satisfy requirements from both sides when making moderation decisions.