To address the limitations of self-regulation and the need to combat online misinformation, national legislation increasingly imposes liability on platforms for content moderation. This study employs qualitative comparative analysis to examine five key national laws: Germany’s NetzDG, France’s Law No. 2018-1202, Brazil’s Resolution No. 23.732/2024, Singapore’s POFMA, and Turkey’s Law No. 2022-7418. Guided by the UN’s and UNESCO’s human rights-based recommendations for platform governance, the analysis focuses on five dimensions: definitions of content and misinformation, moderation practices, transparency requirements, penalties, and independent oversight. Our findings reveal variation in how misinformation is defined, with most jurisdictions adopting vague formulations. Only Brazil’s resolution explicitly addresses AI-generated content. NetzDG emphasises platform-led enforcement; the French and Brazilian instruments rely more on judicial orders; POFMA and Turkey’s law grant discretionary powers to state authorities. Independent oversight, a key safeguard for human rights, is formalised only in France (Arcom) and Germany (regulated self-regulation). Although Turkey designates the BTK as an oversight body, its independence is widely contested. In the absence of an independent regulator, Brazil’s resolution allows judicial assessments to draw on verifications conducted by accredited fact-checkers.
