Platform Content Regulation – Some Models and Their Problems

Keller, Daphne

Lawmakers today are increasingly focused on their options for regulating the content we see on online platforms. I described several ambitious regulatory models for doing that in my recent paper, Who Do You Sue? State and Platform Hybrid Power Over Online Speech. This blog post excerpts that discussion and sketches out potential legal regimes to address major platforms’ function as de facto gatekeepers of online speech and information. Readers I’ve talked to so far have expressed particular interest in the Magic APIs model, which speaks to both speech and competition concerns.

Of course, lawmakers’ options are simpler if their only goal is to make platforms take down more user-generated content. A clumsy law like FOSTA can achieve that goal easily – at the cost of driving a great deal of legal speech offline. I outlined more nuanced options for lawmakers purely pursuing takedown goals in this White Paper, and will offer a shorter rundown of available doctrinal “dials and knobs” in a piece coming out soon from Yale ISP.

The ideas outlined here assume that lawmakers want to shape platform behavior more broadly – including by constraining platforms’ discretionary power to take down users’ lawful speech under Terms of Service or Community Guidelines. I refer to arguments that platforms can be compelled to carry content against their will as “must-carry” claims. Much of the paper is devoted to the likely constitutional barriers to such claims or laws. This section also builds on discussions about exactly which speech, and exactly which platform operations, including content ranking or amplification, might be affected by must-carry rules.