Around 75% of Internet users are from non-English-speaking countries in the Majority World (i.e., the Global South). Yet social media companies allocate most of their content moderation resources to English-speaking populations in the West. This disparity in platforms' content moderation efforts has led to human rights violations and unjust moderation outcomes in the Majority World. To address this critical gap, researchers from these regions have focused on improving automated detection of harmful content in local languages, which are often underrepresented in digital spaces and lack robust technological support.
[…]