- After a period of self-regulation, countries around the world began to implement regulations for the removal of terrorist content from tech platforms. However, much of this regulation has been criticised […]
- Researchers have paid considerable attention to social media platforms, especially the ‘big companies’, and increasingly to messaging applications, examining how effectively they moderate extremist and terrorist content on […]
- YouTube announced a global commitment to reduce the spread of problematic content by actively recommending “trusted” news sources on its platform, but did not disclose the criteria used to classify […]
- Because of their scalability and broad applicability, artificial intelligence (AI) and machine learning have gained prominence in platform management. This has led to widespread debates […]
- Mediated trust, the internet and artificial intelligence: Ideas, interests, institutions and futures
  This paper addresses the question of trust in communication, or mediated trust, with regard to the historical evolution of the Internet and, more recently, debates around the impacts of artificial […]
- Amid wider discussions of online harassment on social media platforms, recent research has turned to the experiences of social media creators whose compulsory visibility renders them vulnerable to frequent attacks, […]
- A number of issues have emerged related to how platforms moderate and mitigate “harm.” Although platforms have recently developed more explicit policies in regard to what constitutes “hate speech” and […]
- Social media platforms make choices about what content is and is not permissible on their services. For example, they decide whether and how to deal with online harassment and hate […]