Over the last decade, social media platforms have too often fallen short of their promise to connect and empower people and have instead become tools optimised to engage, enrage and addict them. The business model of the dominant platforms creates a profit incentive to prioritise user engagement over safety, with algorithmic recommender systems focused on keeping people clicking and scrolling for as long as possible, which in turn allows the companies to sell more ad space and thereby generate revenue. There is mounting evidence of the harms caused when content ranking and recommendation are optimised for engagement. Ranking algorithms optimised for engagement select emotive and extreme content and show it to the people they predict are most likely to engage with it (where “engage with” means stopping to view or watch, clicking, replying, retweeting, etc.). Meta’s own internal research found that a significant proportion (64%) of new joins to extremist groups were driven by its own recommender systems. Even more alarmingly, in November 2023, Amnesty International found that TikTok’s algorithms exposed multiple accounts of 13-year-old children to videos glorifying suicide within less than an hour of the accounts being created.

By determining how users find information and how they interact with all types of commercial and non-commercial content, recommender systems are a crucial design layer of the Very Large Online Platforms (VLOPs)1 regulated by the Digital Services Act (DSA).2 Because of the specific risks they pose, recommender systems warrant urgent and special attention from regulators to ensure that platforms mitigate “systemic risks”. Article 34 of the DSA defines “systemic risks” by reference to the dissemination of illegal content and “actual or foreseeable negative effects” on the exercise of fundamental rights, on civic discourse, electoral processes and public security, and in relation to gender-based violence, the protection of public health and minors, and physical and mental well-being.
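To make the engagement-optimisation dynamic described above concrete, the following minimal Python sketch (purely illustrative; the names and numbers are hypothetical and do not come from any platform’s actual systems) shows the core logic: candidate posts are ordered solely by a model’s predicted probability of engagement, so emotive or borderline content that attracts the highest engagement predictions rises to the top of the feed, regardless of its quality or its effects on the user.

```python
# Schematic illustration of an engagement-optimised ranker.
# Not any platform's real code: all values and names are hypothetical.

from dataclasses import dataclass


@dataclass
class Post:
    post_id: str
    predicted_engagement: float  # model's estimate of P(user engages), 0..1
    is_borderline: bool          # e.g. emotive/extreme content near policy limits


def rank_for_engagement(candidates: list[Post]) -> list[Post]:
    """Order posts purely by predicted engagement, highest first."""
    return sorted(candidates, key=lambda p: p.predicted_engagement, reverse=True)


if __name__ == "__main__":
    feed = rank_for_engagement([
        Post("calm_explainer", 0.12, is_borderline=False),
        Post("friend_update", 0.20, is_borderline=False),
        Post("outrage_clip", 0.48, is_borderline=True),
    ])
    # Borderline, emotive content tends to receive the highest engagement
    # predictions, so it is ranked first in the resulting feed.
    for post in feed:
        print(post.post_id, post.predicted_engagement)
```

Safety-oriented alternatives discussed later in this briefing would change this objective, for example by down-ranking borderline content or by letting users and third parties choose different curation criteria, rather than optimising for engagement alone.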
As shown in our previous briefing, “Prototyping User Empowerment”, there are many ways for companies to mitigate systemic risks, including features that encourage individuals to make conscious choices about content curation and that promote safer online behaviours and healthier habits.3 This transition towards authentic personalisation (i.e. an experience actively shaped by users) must start with VLOPs making their platforms safe by default. Unfortunately, this cannot be achieved with one quick switch. It will involve redesigning many elements of the platform, including new features that actively promote more conscious user choice, opening up the social network infrastructure to third-party content curation services, and measures to protect users from addictive and predatory design features. In this briefing, we outline five categories of changes to the default settings of today’s dominant social media platforms that will make their functioning safer, rights-respecting and human-centric.