
How the Public Views Deletion of Offensive Comments

Authors:
Masullo, Gina M.; Gonçalves, João; Weber, Ina; Laban, Aquina; Torres da Silva, Marisa; Hofhuis, Joep
Year:
2021

To find out how people in the United States, the Netherlands, and Portugal feel about social media platforms and news organizations deleting offensive comments, the Center for Media Engagement teamed up with researchers in the Netherlands and Portugal. The study examined three aspects of comment deletion: whether a human moderator or an algorithm deleted the content, the type of deleted content (profanity or hate speech), and the level of detail in the explanation for the deletion.

THE PROBLEM
Social media platforms and news organizations often delete comments that are offensive as a means to improve online discussions.1 In this project, the Center for Media Engagement teamed up with researchers from Erasmus University in the Netherlands and NOVA University in Portugal to examine how the public perceives comment deletion and the moderators who do it.

We looked at three aspects of comment deletion. We first considered whether a human moderator or an algorithm deleted the comment. The use of algorithms and other forms of artificial intelligence is increasingly seen as a solution2 for comment moderation because, as a Center for Media Engagement study found, the task is emotionally exhausting for humans.3 We also considered whether the type of deleted content (profanity or hate speech) or the level of detail in the explanation for the deletion influenced people’s perceptions of the deletion and of the moderator who carried it out. This project was funded by Facebook. All research was conducted independently.

KEY FINDINGS
Across all three countries, people perceived the deletion of hate speech as fairer and more legitimate than the removal of profanity. They also perceived moderators who removed hate speech as more transparent. U.S. and Dutch participants perceived moderators who removed hate speech as more trustworthy than those who removed profanity; this was not the case in Portugal.
The type of moderator, human or algorithm, had no effect on people’s perceptions of content deletion in the U.S. or the Netherlands. In Portugal, people perceived deletions by human moderators as fairer and more legitimate than deletions by algorithms.
People in all three countries felt the platform was more transparent if it explained in detail why the content was removed. However, the level of detail in the explanation had no effect on perceptions of how fair or legitimate the deletion was, or on whether people perceived the moderator as trustworthy.
IMPLICATIONS
The findings offer some clear takeaways for social media platforms and news organizations regarding comment deletion:

Moderators should focus more on hate speech, because people see hate speech as more in need of deletion than profanity.
Moderators should explain specifically why content was removed, rather than offer general explanations.
Algorithmic moderators may be perceived similarly to human moderators, although cultural context should be considered because this may not be the case in every country.