Facebook is one step away from creating its global Oversight Board for content moderation. The bylaws for the board, released on Jan. 28, lay out the blueprint for an unprecedented experiment in corporate self-governance in the tech sector. While there’s good reason to be skeptical that Facebook can fix problems like hate speech and disinformation on its own, we should pay close attention to how the board proposes to make decisions.
When Mark Zuckerberg started talking about a “Supreme Court” of Facebook to judge its content decisions two years ago, the company was beset by now-familiar scandals: from Cambridge Analytica’s misuse of user data to influence the U.S. election to genocide in Myanmar exacerbated by posts on the site inciting violence. A collapse of public trust in the company was accompanied by heightened scrutiny from lawmakers demanding change. Yet Facebook’s own decisions to limit content and speech, like banning nudity or deepfakes, are often met with intense public criticism.
[…]