Tim Wu’s essay, Will Artificial Intelligence Eat the Law?, posits that automated decisionmaking systems may be taking the place of human adjudication in social media content moderation. Conventional adjudicative processes, he explains, are too slow or clumsy to keep up with the speed and scale of online information flows. Their eclipse is imminent, inevitable, and, he concludes, just as well.1 Wu’s essay does not really indulge in the romantic tropes about cyborg robot overlords, nor does he harbor any conceit about the superiority of networked technologies. He does not promise, for example, anything like Mark Zuckerberg’s prophecy to Congress in spring 2018 that artificial intelligence would soon cure Facebook of its failings in content moderation.2 To the contrary, Wu here is sober about the private administration of consumer information markets. After all, he has been among the most articulate proponents of positive government regulation in this area for almost two decades. The best we can do, Wu argues, is create hybrid approaches that carefully integrate artificial intelligence into the content moderation process.3

But in at least two important ways, Wu’s essay masks significant challenges. First, by presuming the inevitability of automated decisionmaking systems in online companies’ distribution of user-generated content and data, Wu obscures the indispensable role that human managers at the Big Tech companies play in developing and selecting their business designs, algorithms, and operational techniques for managing content distribution.4 These companies deploy these resources to further their bottom-line interests in enlarging user engagement and dominating markets.5 In this way, social media content moderation is really only a tool for achieving these companies’ central objectives. Wu’s essay also says close to nothing about the various resources at work “behind the screens” that support this commercial mission.6 While he recognizes, for example, that tens of thousands of human reviewers exist, Wu downplays the companies’ role as managers of massive transnational production lines and employers of global labor forces. These workers and the proprietary infrastructure with which they engage are invaluable to the distribution of user-generated content and data.

Second, the claim that artificial intelligence is eclipsing law is premature, if not just a little misleading. There is nothing inevitable about the private governance of online information flows when we do not yet know what law can do in this area. This is because courts have abjured their constitutional authority to impose legal duties on online intermediaries’ administration of third-party content. The prevailing judicial doctrine under section 230 of the Communications Act (as amended by the Communications Decency Act)7 (section 230) allows courts to adjudicate the question of intermediary liability for user-generated content only when the service at issue “contributes materially” to that content.8 This is to say that the common law has not had a meaningful hand in shaping intermediaries’ moderation of user-generated content because courts, citing section 230, have foresworn the law’s application. Defamation, fraud, and consumer protection law, for example, generally hold parties legally responsible for disseminating unlawful information that originates with third parties.
But, under the prevailing section 230 doctrine, powerful companies like Facebook, Google, and Amazon have no legal obligation to block or remove user-generated content that they have no hand in “creat[ing]” or “develop[ing].”9 This standard requires a substantial amount of involvement by an online company before liability attaches. This is why it is not quite right to say, as Wu does here, that we are witnessing the retreat of judicial decisionmaking in this setting. There has never been a chance to see what even modest, run-of-the-mill judicial adjudication of content moderation decisions looks like, since Congress enacted section 230 over twenty years ago.

The view of online content moderation that Wu advances here is pristine. Its exclusive focus on the ideal Platonic form of speech moderation resonates with the view that the internet can be an open and free forum for civic republican deliberation.10 In this vein, he appeals to the healthy constitutional skepticism in the United States about government regulation of expressive conduct. One might associate his arguments here with those of other luminaries who have proposed that we use communication technologies to create opportunities for discovery and progress.11 In any case, by presenting the issue of content moderation as a battle between human adjudication and artificial intelligence, Wu’s essay fails to identify the industrial designs, regulatory arrangements, and human labor that have put the Big Tech companies in their position of control. It does not really engage the political economy and structural arrangements that constitute and condition online content moderation.

I generally admire and subscribe to Wu’s various accounts and critiques of the networked information economy. He is a clear and eloquent spokesperson for why positive procompetitive regulation and consumer protection in communications markets are vital to the operation of democracy. I therefore take his recent essay, and its relatively light touch on the Big Tech companies’ content moderation choices, as addressed to the audience he names: the designers of these new hybrid processes. This Response, in contrast, is addressed to policymakers and reformers: the very people whom Wu has inspired with his other writing. I offer this caveat to say that Wu and I may not actually disagree as a matter of substance. I will simply use this generous opportunity to respond to his essay by identifying the reasons we cannot afford to turn away from the lived political economy that shapes our networked world.
