Misinformation: tech companies are removing ‘harmful’ coronavirus content – but who decides what that means? | The Conversation

By Stephanie Alice Baker, Matthew Wade & Michael James Walsh
August 27, 2020

The “infodemic” of misinformation about coronavirus has made it difficult to distinguish accurate information from false and misleading advice. The major technology companies have responded to this challenge by taking the unprecedented step of working together to combat misinformation about COVID-19.

Part of this initiative involves promoting content from government healthcare agencies and other authoritative sources, and introducing measures to identify and remove content that could cause harm. For example, Twitter has broadened its definition of harm to address content that contradicts guidance from authoritative sources of public health information.

Facebook has hired extra fact-checking services to remove misinformation that could lead to imminent physical harm. YouTube has published a COVID-19 Medical Misinformation Policy that disallows “content about COVID-19 that poses a serious risk of egregious harm”.

The problem with this approach is that there is no common understanding of what constitutes harm. The different ways these companies define harm can produce very different results, which undermines public trust in tech firms’ capacity to moderate health information. As we argue in a recent research paper, addressing this problem requires these companies to be more consistent in how they define harm and more transparent in how they respond to it.

[…]

Source: Misinformation: tech companies are removing ‘harmful’ coronavirus content – but who decides what that means?
