Citation

Title: The CJEU’s new filtering case, the Terrorist Content Regulation, and the future of filtering mandates in the EU
Author: Keller, Daphne
Year: 2019

For several years now, one of the most hotly contested Internet policy questions in the European Union (EU) has been whether and how platforms like YouTube or Twitter can be required to proactively monitor their users’ posts in search of illegal content. Proposals for platforms to monitor user behavior by deploying technological filters were at the heart of the EU Copyright Directive, which passed into law in 2019, and of the Terrorist Content Regulation, which is now in the final stages of negotiation. Filters are likely to be central to the coming years’ debates about the pending Digital Services Act, and to discussion of potential changes to the eCommerce Directive, which has structured platforms’ legal responsibility for user content in the EU for almost two decades.

A case decided by the Court of Justice of the European Union (CJEU) in October, Glawischnig-Piesczek v. Facebook Ireland, was widely expected to shed light on the subject of filtering requirements. It did shed light, but only a little. The Court discussed legislative rules that govern filtering under eCommerce Directive Article 15, but not the fundamental rights rules that legislators and Member State courts must apply under the EU Charter. The legal conclusions it reached will complicate fundamental rights analysis and legal paths forward for both “pro-filtering” and “anti-filtering” advocates in the evolving legislative debate. This blog post will briefly discuss the ruling’s relevance for future EU legislation, and in particular for the Terrorist Content Regulation. It builds on the much deeper analysis in Dolphins in the Net, my Stanford CIS White Paper about the Glawischnig-Piesczek AG Opinion.