Leveraging Structured Trusted-Peer Assessments to Combat Misinformation

Jahanbakhsh, Farnaz; Zhang, Amy X.; Karger, David R.
Proceedings of the ACM on Human-Computer Interaction

Platform operators have devoted significant effort to combating misinformation on behalf of their users. Users are also stakeholders in this battle, but their own efforts to combat misinformation go unsupported by the platforms. In this work, we consider three new user affordances that give social media users greater power in their fight against misinformation: (1) the ability to provide structured accuracy assessments of posts, (2) user-specified indications of trust in other users, and (3) user configuration of social feed filters according to assessed accuracy. To understand the potential of these designs, we conducted a need-finding survey of 192 people who share and discuss news on social media, finding that many already act to limit or combat misinformation, albeit by repurposing existing platform affordances that lack customized structure for information assessment. We then conducted a field study of a prototype social media platform that implements these user affordances as structured inputs that directly affect how and whether posts are shown. In the study, 14 participants used the platform for a week to share news posts while collectively assessing their accuracy. We report on users' perception and use of these affordances, and we provide design implications for platforms and researchers based on our empirical observations.