Research Review

Disinformation and the Business of the Consumer Internet

Introduction

The 2016 presidential election in the United States stood to be historic—and historic it was, albeit in part for the wrong reasons. Donald Trump’s rise to power was unique, and in time the public would witness the dropping of one bombshell after the next concerning the circumstances around the election; a steady flow of revelations around Russian election interference in the lead-up to November 2016 ate away at Trump’s initial apparent triumph over opposing candidate Hillary Clinton. Much of the voting public experienced both the election interference, and the backlash, over social media and internet platforms. The truth about election interference seemingly had to be pried from the industry: only with the threat of serious congressional inquiry culminating in a 2017 hearing did it emerge that Russian state-controlled disinformation operators had infiltrated leading American social media networks operated by Meta (formerly Facebook), Google, and Twitter.[1]

The demand for new regulation targeting industry practices in the sector—with an eye toward shielding against the foreign disinformation problem—came in force quickly after the congressional inquiry, perhaps most notably through the introduction of the nominally bipartisan Honest Ads Act. But neither this bill nor any other fundamentally reformative measures came under serious consideration. Nevertheless, there has been a veritable tide of new ideas for advancing inquiry and knowledge concerning the regulation of social media platforms to contain the disinformation problem. These calls have only intensified in response to the events of 2021. The attempted insurrection at the US Capitol Building in Washington in January, which was largely coordinated through social media, spurred calls for making platforms more responsible for content on their sites. In fall 2021, a whistleblower leaked documents showing that Meta was well aware of real-world harms enabled by Facebook and Instagram, despite public statements to the contrary; in response, legislators have pushed for changes in the law that would allow platforms to be held liable for third-party content posted on their sites. In December 2021, a bill was introduced in the US Senate to allow researchers access to social media data. While it is unclear whether any of these bills will become law, it is clear that regulators and legislators are seeking ways to change the status quo.

The purpose of this paper is to outline the nature of the regulatory environment that gave rise to the business model that is commonly in place in the social media sector today. First, I will connect that business model to the social, political, and economic harms of disinformation and outline some of the reforms that have been suggested by scholars of law, economics, technology, and media. Second, I will offer new thinking on the extent to which various forms of regulatory intervention might shape American democracy in the long run.

The social problems that have spread over the leading internet platforms are driven by more than just ineffective policies. A large body of research indicates that these ills are products of the economic structures that define the consumer internet industry today, stemming from its uninhibited collection of data and the way it uses algorithms to manipulate users’ media experiences. As we step back and view this body of work, a unifying theory of the sector emerges: that the platforms’ underlying business models and corresponding revenue streams enable and encourage the unchecked spread of mis- and disinformation, as DePaula et al. (2018), Ghosh and Scott (2018), and Flew, Martin, and Suzor (2019), among others, have noted. By necessity, the platforms’ efforts to self-regulate their systems are hobbled (at best) from the outset. The question then becomes: what effective external regulations can governments and societies best impose?

Technological circumstances in social media are driven by the regulatory environment

The American economic system has traditionally been openly and radically capitalist—and the United States government took this very approach at the outset of the commercial internet (Greenstein 2016). Tim Berners-Lee proposed the World Wide Web in 1989, developing the protocol that continues to define consumer use of the internet more than three decades later (Berners-Lee 1999). This ultimately enabled an explosion in global digital, web-connected communication—though adoption of the protocol in the immediate wake of its invention was far slower than in the mid-nineties and beyond, when new kinds of internet business models began to come to the fore (Gozzi 2001; Beranek 2007). Rich, dynamic growth in the industry was strongly encouraged by the federal government. President Clinton’s administration, with Vice President Al Gore an active evangelist for the expansion of the internet, promoted its adoption and aggressive growth (Wiggins 2000). This policy stance was both socially and economically sensible for the time, particularly as this early period of the commercial internet was set against the backdrop of a lengthy period of enormous economic stability and success for the nation. There was some strong pushback against this approach to economic design for the internet (see, e.g., Drake 1995); for example, many regarded the Digital Millennium Copyright Act of 1998 (DMCA) as an example of regulatory policy that overly favored media properties and copyright holders (Gillespie 2007). Yet on the whole, leaders in American government, academia, and industry perhaps justifiably saw the internet as a tremendous new economic vertical that carried the potential not only to create new business and communication opportunities, but to change the world (Earl et al. 2010; Amor 2001; Tuomi 2006; Barlow 2000).

A bevy of new policy initiatives in the 1990s created regulatory circumstances that invited yet more innovation and new business growth over the internet. The High Performance Computing Act of 1991—or the “Gore Bill,” as a shorthand, given the then-senator’s work to advance it through Congress—designated $600 million toward the development of high-performance computing technologies and the creation of the National Research and Education Network, the latter of which was designed to bring technical stakeholders together and align on advancing networking standards (Kleinrock 2004). The Telecommunications Act of 1996 was the first step in establishing the physical infrastructure underlying the consumer experience of the internet. It established a “duty to deal” (Department of Justice 2015), a regulatory progression that increased competition and the pace of innovation in internet services, including through stipulations regulating interconnection and wholesale access to incumbent networks (Bruning 1996; Aufderheide 1999). The Communications Decency Act of 1996—which constituted Title V of the 1996 Telecommunications Act—included a provision, Section 230, which gave the providers of “interactive computer services” a liability shield for the hosting, dissemination, or takedown of any user-generated content (Ardia 2010). The legislation established protection from liability for service providers who (1) decided to transmit or otherwise carry user-generated content or (2) decided to take down or censor certain types or instances of user-generated content. Coverage of firms under the law was construed to include companies operating over the internet, including the leading search engines of the time like Yahoo! and AltaVista. Meanwhile, new developments in copyright law, including through the DMCA, cleared the path such that these early (and later) search engines as well as the growing number of social media firms could take advantage of expanded jurisprudential interpretations of fair use to seed their business model with content.

By the 2000s, the internet was thus positioned to grow in the directions favored by this neoliberal mode of regulatory policy (see, e.g., Cohen 2019)—and new business models began to emerge as computing capabilities continued to advance at the pace predicted by Moore’s Law (Schaller 1997). This was most salient for the growth of the commercial internet in two specific areas—namely, data storage capacity and processing power. At a certain stage, the combination of these two technologies and the underlying connectivity offered by the World Wide Web drove the industry past a threshold, eventually establishing the cost-effectiveness of new business models over the internet. Set against the backdrop of regulatory circumstances encouraging open innovation and aggressive growth, companies discovered new capacities to explore cost models and other viable business frameworks (Miguel and Casado 2016), thus enabling the rapid commercialization undertaken by internet entrepreneurs and fueled by the investment community.

The business models that won the internet

Many business models have been tested over the internet, but two have prevailed: targeted advertising and subscription. The latter involves charging users a subscription fee in exchange for access to a service (Wang et al. 2005). The former involves targeting ads—based on a user’s estimated behavioral profile—at the user (Chandra 2009). There is a pronounced difference between the kinds of internet-based services that follow the targeted advertising route to monetization versus those that follow the subscription model (Ghosh 2019a). It should be noted, however, that there is a spectrum at play, with some companies exhibiting subscription as a business model and others embracing targeted advertising, with varying levels of surveillance and microtargeting among them.

Those web-based services that implement subscriptions—and survive using that scheme—tend to lease or sell access to some form of tangible product or service that can be easily translated into value in the real world. Two categories of tangible products and services are (1) digital intellectual property and (2) provision of basic physical services. The first category includes such services as Spotify, the New York Times’ digital business, and Netflix. Each of these owns or accesses intellectual property under various conditions, and extracts subscription fees in exchange for limited access to that intellectual property (which is typically available online only, with no rights to download). The second includes services like Amazon Prime, Uber, and the Latin American delivery service Rappi. It should be noted that there is some inevitable overlap between these two categories; Amazon Prime offers access to intellectual property and to basic services (such as free shipping on product orders executed through Amazon). The companies that fall into this category typically do not monetize the ongoing relationship with users itself, though they may well collect information on users with machine-learning models. For example, Amazon, Netflix, or Pandora might infer what products you might wish to purchase in the future, what television series you might wish to watch next, or what types of music you enjoy the most. While these can be seen more as efforts to perfect their products than as attempts to keep the user engaged on the platform, there is not consensus on this point. Arguments that many firms in the internet industry have monetized dialogical relationships with consumers through individual data have increased in number in recent years, most notably through the concept of surveillance capitalism (Zuboff 2019), though this idea has been received critically by some (Doctorow 2020).

Those web-based services that implement targeted advertising, a process by which users are subjected to digital advertisements based on inferences about their behavioral profiles, tend to engage in a dialogical relationship with end users over time. Consider Instagram, the social media network owned and operated by Meta. Instagram collects information about users through their on-platform engagement, makes inferences about the end user’s individual personality, and injects targeted advertisements into the user’s in-app experience (e.g., in both the social feed and the story feed) designed to further engage the user. These platforms are “dialogical” in the sense that they engage in an ongoing, fine-tuned, and sophisticated dialogue with the individual user. The user typically has desires, preferences, a belief system, likes, dislikes, routines, and behaviors; the platform service can begin to assert and attribute to the user a certain mapping over these categories—to the level of granularity at which each individual user might have a unique behavioral profile based on potentially millions of data-driven measures gathered by the platform. The dialogical platform—whether Instagram, Facebook, Google Search, YouTube, Snapchat, TikTok, or another—will then use this behavioral profile to engage in an ongoing “dialogue” with the user, feeding the user content the user will ideally find maximally engaging.

Dialogical platforms differ from subscription ones in terms of their ability to monetize engagement. Incremental engagement enables the gathering of ever more behavioral insights about the user and, in turn, greater potential attention from the user to convert into targeted advertising space. The ability to target ads contributes directly to the value proposition of the dialogical firm, whereas collecting user data is seen as a cost for the subscription-based firm, which primarily generates direct revenue through user subscriptions (Schrader and Ghosh 2018).

Delving one level further into the business model of dialogical internet platforms reveals yet more. There is much that separates dialogical platforms from one another; the user interfaces and core functionalities of applications like YouTube, Facebook, Instagram, and TikTok are each distinctive in their own rights. And yet three features of these platforms are consistent across the four applications—and across the broader industry of consumer-facing dialogical internet platforms. When analyzed together, they present a positive feedback loop comprising uninhibited data collection, algorithmic curation of content, and aggressive growth of the platform.

Dominant internet platforms engage in uninhibited data collection at the expense of user privacy. The first consistent component of the business model involves the collection of personal information and proprietary data on, in practice, an uninhibited basis (Houser and Voss 2018; Esteve 2017). Internet firms collect personal data to conduct behavioral profiling—in other words, to infer details about the consumer’s individuality. Such data is gathered from any source through which it is cost-effective for the firm to collect it. Due to advances in computing and storage, firms may collect data through many sources: on-platform engagement (Young 2014), off-platform behavior (Roosendaal 2010), precise geolocation information from sources of varying granularity, including GPS signals (Chow 2013), end-user device details (Whittaker 2019), transaction information (Bergen and Surane 2018), mobile ecosystem usage data (Constine 2019; Nield 2019), and more. The data are typically compiled in profiles on individuals to infer as detailed a view of the consumer as possible. While such data collection may be invasive and raise privacy concerns, researchers have found that consumers of varying demographic backgrounds may perceive privacy harms in varying ways (Quinn, Epstein, and Moon 2019)—a possibility that should be kept in mind in developing policy.
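To make the profiling mechanics concrete, the sketch below shows how signals from disparate collection points might be folded into a single behavioral profile. It is a minimal, hypothetical illustration in Python: the source names, topics, and additive scoring are assumptions made for exposition and are not drawn from any particular firm’s systems.

```python
from dataclasses import dataclass, field
from typing import Dict, List

@dataclass
class BehavioralProfile:
    """Hypothetical container for the per-user profile described in the text."""
    user_id: str
    interests: Dict[str, float] = field(default_factory=dict)  # topic -> inferred affinity
    sources_seen: List[str] = field(default_factory=list)      # provenance of the signals

    def ingest(self, source: str, signals: Dict[str, float]) -> None:
        """Fold one batch of signals from a single collection point into the profile.
        Repeated observations across sources compound, which is how sparse
        signals accumulate into a fine-grained behavioral portrait."""
        self.sources_seen.append(source)
        for topic, strength in signals.items():
            self.interests[topic] = self.interests.get(topic, 0.0) + strength

# Signals arrive continuously from the kinds of sources named above
# (all names and values here are invented for illustration).
profile = BehavioralProfile(user_id="u123")
profile.ingest("on_platform_engagement", {"running": 0.8, "politics": 0.3})
profile.ingest("off_platform_pixel", {"running_shoes": 0.9})
profile.ingest("transactions", {"fitness": 0.6})
print(sorted(profile.interests.items(), key=lambda kv: -kv[1]))
```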

Internet firms develop and maintain highly sophisticated media-manipulating algorithms that curate content for and target ads at their users. Dialogical internet firms typically use machine learning to analyze the corpus of data on a given end user to develop perspectives on their preferences, beliefs, and behaviors so as to generate a behavioral profile on the user. Machine learning algorithms are also used, however, to curate content and target advertising in the digital social experience of the user. Content curation entails an analysis of the universe of content that could be shown to the user in the context of the application interface and the subsequent calculation of metrics—or, put differently, signals—that indicate the likelihood that the user might engage with the content (Deibert 2019). Meta researchers have referred to this as meaningful social interaction (Litt et al. 2020). An effectively ranked feed, from the platform’s perspective, can keep the user maximally engaged such that the user’s continued use of the platform generates (1) an exhaust of behavioral data that can be used by the firm to even more effectively rank the social feed and (2) more advertising space in which to feature aggregated display advertising and thus rake in direct revenues. Ad targeting, meanwhile, is the process by which the firm algorithmically matches the targeting preferences of marketers with the users who might wish to engage with their targeted advertisements (Bhagwan and Sharp 2014). Audience categories on the platform are typically segmented in order to sell off their attention to the highest bidder in an open digital advertising exchange and marketplace. This process involves sub-processes, such as ad optimization and automated audience segmentation based on users’ behavioral profiles. Technology firms often take particularly questionable steps to further refine their engagement-oriented machine learning algorithms. For instance, Meta’s emotional contagion study (Kramer, Guillory, and Hancock 2014) has triggered further inquiry into the mechanisms by which firms can and should experiment in such ways with users (Hallinan, Brubaker, and Fiesler 2019).
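The two processes described above, engagement-based feed ranking and ad matching, can be sketched in a few lines. The following is a deliberately simplified, hypothetical Python illustration: the scoring function, the candidate items, and the single-round “auction” are stand-ins for what are, in practice, large learned models and real-time bidding systems.

```python
def predicted_engagement(profile: dict, item: dict) -> float:
    """Hypothetical stand-in for the learned model that estimates how likely
    the user is to engage (click, share, dwell) with a candidate item."""
    return sum(profile.get(topic, 0.0) * weight
               for topic, weight in item["topics"].items())

def rank_feed(profile: dict, candidates: list, slots: int = 10) -> list:
    """Order candidate content by predicted engagement and keep the top slots.
    Every impression both occupies attention (sellable ad space) and yields
    new behavioral data that sharpens the next round of predictions."""
    return sorted(candidates,
                  key=lambda item: predicted_engagement(profile, item),
                  reverse=True)[:slots]

def match_ad(profile: dict, ads: list):
    """Simplified auction: among ads whose targeting overlaps the user's
    profile, the highest bid wins the impression."""
    eligible = [ad for ad in ads if predicted_engagement(profile, ad) > 0]
    return max(eligible, key=lambda ad: ad["bid"]) if eligible else None

# Toy example: a user profiled as interested in running and, weakly, politics.
profile = {"running": 0.8, "politics": 0.3}
candidates = [
    {"id": "post_marathon", "topics": {"running": 1.0}},
    {"id": "post_recipe", "topics": {"cooking": 1.0}},
    {"id": "post_debate", "topics": {"politics": 0.7}},
]
ads = [
    {"id": "ad_shoes", "bid": 2.5, "topics": {"running": 1.0}},
    {"id": "ad_cookware", "bid": 4.0, "topics": {"cooking": 1.0}},
]
print([item["id"] for item in rank_feed(profile, candidates, slots=2)])
print(match_ad(profile, ads))
```

The loop the sketch compresses is the feedback cycle described above: each ranked impression generates more behavioral data, which improves the next prediction, which in turn creates more sellable attention.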

Dominant internet firms engage in aggressive platform-growth tactics at the expense of would-be rivals and to the detriment of consumer markets in digital media. In parallel to efforts to engage users and keep them using the platform, dialogical platforms often engage in corporate development tactics to draw users to their platforms to the exclusion of the competition. Aggressive growth tactics include forcing existing Facebook users to download Messenger (Gibbs 2016); backtracking on publicly committed protective practices in the interest of aggregating data strongholds (Lomas 2016); linking services to integrate them as one company so as to avoid antitrust scrutiny (Lyons 2020); raising physical, commercial, and digital barriers to entry; copying competitor practices, often targeting smaller firms (Ghosh 2019a); performing such copying tactics or worse after inviting the copied firm to pitch a potential acquisition (Obear 2018); and explicitly engaging in anticompetitive actions to close off the potential for would-be rivals to fairly compete in the market segments in question (Devine 2008). While these practices may not be unique to businesses that operate over the internet, they are examples of the aggressive tactics employed by dialogical platforms to shut down the possibility of competition. Underlying these practices is a powerful network effect by which the value of the firm increases with the addition of each new marginal user who joins the platform service—a phenomenon that only serves to further strengthen the firm’s economic stranglehold over its market (Iacobucci and Ducci 2019). These concerns and others have been raised in the context of recent antitrust suits in the United States against Google (Paxton 2020) and Meta (FTC 2020).

Why have dialogical consumer internet platforms universally opted to adopt this business model? This is an open question, but given that these firms operate in a radically open marketplace in which their business practices largely go unregulated by US authorities, firms gravitate toward the path to monetization that yields the greatest margin—that is to say, the one that maximizes the differential between potential revenues and realized costs. Seen through another lens: it would be very difficult—perhaps impossible—to develop a social media network or other dialogical internet platform without following these core business practices, because no alternative organization would yield as great a potential to behaviorally engage users, which in turn could generate the margins necessary to challenge the existing leading companies in the marketplace. It would not, in other words, be possible to effectively compete; any enterprise that attempts to enter the market without these practices would likely wither quickly in the face of the current industry leaders.

Should it be the case that the prevailing business model for the sector has introduced new forms of economic, political, and social harm to society, however, it would behoove policymakers to consider reorienting the marketplace by advancing earnest regulatory standards that create stronger incentives to protect the rest of society.

The weaponization of modern digital infrastructures

That the business models of the dialogical internet platforms instigate societal harms is a nontrivial assertion. Some might suggest that the most challenging negative externalities society has faced are essentially content-related concerns, and that rather than addressing the business models of dialogical firms, what is required is simply to provide greater incentives for the technology industry to more effectively moderate content. Where, then, is the harm to the information environment stemming from these business models—and what causes the alleged tendency for mis- and disinformation to spread?

Human attention is limited; this observation offers an initial framing through which to analyze the above question. When taken in the aggregate, consumers have a limited amount of time in the day, and an even more limited amount of that time to devote to discretionary media consumption (Neumann 2016). Meanwhile, in media contexts, people have a propensity toward engagement; when we chance upon media content that triggers emotions in us, we might wish to consume more of the same kind of content—which some researchers have termed, in the context of YouTube, a tendency to enter a “rabbit hole” (Kaiser and Rauchfleisch 2019).

This propensity among users of social media and dialogical platforms to look for more and more engaging content is particularly problematic in the context of misinformation, which, as researchers at the Massachusetts Institute of Technology have found, travels faster and further than the truth (Vosoughi, Roy, and Aral 2018). The implication is apparent: the truth, including accurate reporting of the news, is often less extraordinary than falsehoods, particularly as the designers of falsehoods, intentional or not, are at liberty to formulate content that is extreme in nature. This suggests that, barring platform, governmental, or another form of intervention, falsehoods and so-called fake news—a term at which many researchers of mis- and disinformation bristle given its inherent vagueness, implicit inaccuracies, and other shortcomings (Goldberg 2018)—will necessarily have great potential to drive impact on dialogical platforms (Pariser 2011).

This, in part, is the phenomenon that top-down disinformation operators exploit today: identify through data analysis the thin cracks in the American voting population, and shower those thin cracks with political falsehoods and conspiracies. These cracks widen as more and more people in target audiences share and re-share engaging false information. Disinformation operators can then watch as the cracks begin to rip at the fabric of society, engendering increased polarization, hate, conspiracy, and resulting political action (Benkler, Faris, and Roberts 2018; Arsenault 2020).  Indeed, the 2016 Trump campaign—though perhaps it cannot be accused of directly spreading disinformation—used artificial intelligence to generate wide ranges of similar content and tested it with different communities, eventually doubling down on the combinations of content and audience segment that resonated most with the target community (Marantz 2020). There is tremendous incentive, given the low cost of advertising-based disinformation campaigns, for disinformation operators to exploit this system. This includes the fact that as ads are shared by the viewer, they become organic content on certain platforms, resulting in free engagement and influence for the political operator (Dommett and Power 2019).

The novel negative externalities generated by dominant digital platforms extend beyond disinformation to polarizing content, hate speech, and algorithmic bias, and they enable the spread of violence and terrorism, harms that disproportionately impact marginalized populations (DePaula et al. 2018). Some have suggested that the business model underlying dialogical platforms should not be the direct subject of regulation—a contention that aligns with the economic tradition in the United States to avoid industrial policy and enable the market to innovate. This being said, consumer-rights-based protections may accomplish much in diminishing the negative externalities generated by social media networks; such approaches might focus not on the positive topology of the business practices of a given firm expressed altogether as a monolithic business model, but rather on the negative space, expressed through the desires of the rest of society, including end consumers.

Recent policy concerning digital platforms

The condition of today’s media ecosystem has caused politicians, advocates, and consumers alike to argue that the regulatory regime that applies to the dialogical platforms must be modernized to reflect problems of concentrated abusive market power, consumer privacy invasion, the lack of public transparency into the operation of what some view as public goods, the imposition of unfair algorithmic processes on marginalized classes of the population, and more (Ghosh and Scott 2018; Flew, Martin, and Suzor 2019). This in turn has instigated some action from the internet industry, ranging from rhetorical statements to concerted efforts to structurally modify internal operations and policy decision-making processes—though many have criticized the industry for failing to do enough to effectively contain certain harms (Bay and Fredheim 2019). Others have also pointed to the challenges that leading internet platforms face in their efforts to contain potential harms related to political advertising (Gillespie 2018; Kreiss and McGregor 2019). Here I outline some of the most substantive changes that have been instituted by various members of the industry.

  • Increased sophistication in detection AI models. Major internet platform firms have accelerated the development of artificial intelligence trained to moderate content (Perry 2020). These systems take advantage of sophisticated machine-learning models; by using a corpus of training data, firms can help models learn to assess what forms of content are legitimate and what constitutes disinformation or harmful misinformation—and those models can subsequently flag the content for human review (a simplified sketch of this kind of pipeline appears after this list).
  • Expansions in content moderation staff. Human review is typically conducted through a hierarchical internal corporate process (Klonick 2018). Content policies are developed through cross-functional, principled, and high-level internal deliberations; various conditions and considerations over those principles are developed by expert staff; and those policies are executed by content moderators trained to deliver judgements in accordance with the high-level policies developed at a more senior point in the corporate hierarchy. Recent research has shown the human cost of content moderation; workers who moderate platforms view violent or otherwise disturbing content daily, which takes a considerable emotional toll (Gray and Suri 2019; Roberts 2019; Barrett 2020).
  • Temporary staffing operations focused on critical events. Internet platforms often stand up internal teams to take direction from senior policy officials and prevent the spread of malicious mis- and disinformation—as in the case of Meta in the lead-up to the 2018 midterm elections in the United States (Chakrabarti 2018).
  • News-related initiatives. Meta and Google have each designated millions of dollars to directly support the news industry through the Journalism Project and News Initiative, respectively. Part of the stated motivation for the development of these projects was to support local news, advance investigative journalism, enhance journalistic quality, and improve the quality of information available online. Some media scholars have, however, suggested these initiatives do not sufficiently counterbalance the economic activity that digital platforms have drawn away from traditional journalistic outlets.
  • Crowdsourcing of information. Internet platforms have variously considered and in certain cases crowdsourced user-provided signals concerning the validity and truth of instances of content. Some of these signals have been used to support internal and third-party fact-checking operations, with some of this fact-checking resulting in moderation flags applied to the offending content (Shu et al. 2017).
  • The Facebook Oversight Board. Facebook’s Oversight Board, informally described by some as the company’s own supreme court, is a body of third-party legal and policy experts who are empowered by the firm to offer judgements on content takedown controversies that are raised to the board by the company. The board is nominally independent of the company, and though it is largely designed to tackle questions of hate speech occurring on the company’s platforms, the board is equipped to deliver open, non-binding policy recommendations to the company (Ghosh 2019b). The Oversight Board’s most notable case to date involves the deplatforming of former president Donald Trump. In spring 2021, the Board upheld Meta’s decision to ban Trump from the platform; however, it ruled that the suspension could not be indefinite and that the company would have to revisit the decision. The case has generated broader discussion of the power mainstream social media platforms hold over political discourse through their corporate decision-making (Financial Times 2021).
  • Free speech rhetoric. Internet firms have variously made statements concerning the nature of the information environment, with the most notable statements made by Meta chief executive Mark Zuckerberg, who has stated that he is in favor of a free speech-oriented policy approach for the company’s platforms—a position that many have heavily criticized (Bowers and Zittrain 2020). It bears repeating that in the US, First Amendment protections of free speech only apply to actions by the government. The privately owned internet platforms can restrict or permit content as they see fit—this is in fact what Section 230 intended. Indeed, some legal and media scholars have suggested that the norms of free speech should be renegotiated through legislative and corporate reforms to protect against offending content such as disinformation (Goldenziel and Cheema 2019; Manzi 2019).
  • Advertising policy changes. Many have suggested a link between marketers’ capacity to microtarget voting segments of the population and the spread of coordinated disinformation. Broader conclusions have been drawn, too, that microtargeting damages the democratic political process even when it is used in more legitimate ways by political campaigns—much as the Trump campaign did in the lead-up to the 2016 presidential election (Ribeiro et al. 2019; Ghosh 2019c). Twitter has been particularly forceful in recognizing this harm—and leaving large potential revenues on the table—with chief executive Jack Dorsey not only suggesting that microtargeting presents tremendous political concerns, but going so far as to commit to ban all political advertising on Twitter (Twitter, n.d.).
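As referenced in the first item above, detection models are trained on labeled examples and used to route high-risk content to human reviewers. The sketch below shows the basic shape of such a pipeline, assuming a simple text-only classifier built with scikit-learn; the training posts, labels, and review threshold are invented for illustration, and production systems are vastly larger, multilingual, and multimodal.

```python
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.linear_model import LogisticRegression
from sklearn.pipeline import make_pipeline

# Tiny invented training corpus: 1 = likely disinformation, 0 = benign.
posts = [
    "miracle cure the government is hiding from you",
    "election was stolen share before they delete this",
    "our team won the game last night",
    "new recipe for banana bread",
]
labels = [1, 1, 0, 0]

# Bag-of-words features feeding a linear classifier: a deliberately minimal
# stand-in for the large models the platforms actually deploy.
model = make_pipeline(TfidfVectorizer(), LogisticRegression())
model.fit(posts, labels)

def triage(post: str, threshold: float = 0.5) -> str:
    """Flag posts whose predicted risk exceeds the threshold for human review."""
    risk = model.predict_proba([post])[0][1]
    return "flag_for_human_review" if risk >= threshold else "allow"

print(triage("they are hiding the real vote counts, spread the word"))
```

The essential design point survives the simplification: the model only flags content, and the judgement described in the second item above remains with human moderators applying the company’s content policies.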

Policy responses to the disinformation problem

Though the global community has been active in developing policy to combat the disinformation problem (Bradshaw, Neudert, and Howard 2018), that progress has not come without criticism; some argue, for instance, that certain content-related measures, like the German Network Enforcement Act (Gesetz zur Verbesserung der Rechtsdurchsetzung in sozialen Netzwerken, or NetzDG), should be reformed. Meanwhile, the American regulatory and policymaking community has thus far been quite limited in advancing meaningful policies to push back against the threats of mis- and disinformation. While attempts have been made by various coalitions to advance reformative measures, few have succeeded. That being said, numerous scholars and policymakers have outlined structural measures that should be undertaken to develop a novel regulatory regime for the technology sector (Flew, Martin, and Suzor 2019; Gorwa 2019; Warner 2018; Crilley and Gillespie 2018). Some have argued specifically for new norms in media regulation to combat the negative externalities generated through dominant internet platforms (Kornbluh and Goodman 2020; Bechmann 2020; Baade 2018; Napoli 2019; Mosco 2018; Ghosh and Scott 2018). Livingstone (2019) has explored questions of how audiences have traditionally been addressed through regulation, and what implications these prior findings may have for research into media regulatory policy in the way forward. Janowski, Estevez, and Baguma (2018) have offered a number of ideas for structuring the relationship between citizens, corporations, and government in the digital sphere. Zuckerman (2020) has argued for the creation of a digital public infrastructure to introduce a plurality of platforms that can offer more options for consumers in the digital ecosystem. Russell (2019), meanwhile, has argued that the state of the public sphere today has instigated new threats to journalism. The measures proposed by researchers and policymakers to address mis- and disinformation generally fall into the following categories:

  1. Speech- and content-related policies. The spread of offending content—including disinformation and hate speech—has raised the ire of people and politicians on both the left and right of the political spectrum. In response, policymakers—including former President Trump—have suggested that internet firms should bear more responsibility for the content that spreads over their networks. Section 230 of the Communications Decency Act protects internet platforms from liability for either carrying or taking down user-generated content, and has been the target of both Democrats (Kelly 2020) and Republicans (Trump 2020). While the only related legislation that has passed in recent years over content-related concerns was the FOSTA/SESTA bill (ProPublica, n.d.), there have been more efforts to advance legislation targeting Section 230 (Thune 2020). Various policy experts, scholars, and technologists have further suggested that Congress should consider carve-outs to Section 230 for ads (Bergmayer 2019), for content that has gone viral, for civil rights violations, or for content that appears in algorithmically manipulated streams. Hwang (2017) has explored questions of amending Section 230 with an eye to containing the disinformation problem. Citron and Wittes (2017) have made suggestions around developing a “reasonable effort” standard, a measure that would seek to hold technology firms accountable to commitments they have publicly made regarding the containment of offending content such as hate speech. Keller (2019) has offered a number of ideas for regulatory policy concerning platform content moderation. Siekierski (2019), meanwhile, has explored mechanisms by which the Canadian government could diminish the impact of malicious synthetic content operations such as the dissemination of deepfakes.
  2. Transparency surrounding content. The Honest Ads Act, introduced in the aftermath of the 2016 presidential election in light of evidence of Russian disinformation operations, would stipulate new transparency measures for political advertising in digital media contexts (Warner 2017). Under current circumstances, social media firms and other internet platforms have limited regulatory requirements to be transparent about where political advertising content came from and who was responsible for disseminating it. The Honest Ads Act and related policies that have been proposed by various scholars and policy experts, however, would renegotiate this situation, introducing stipulations that internet platforms featuring political advertising be transparent about the genuine provenance of the advertisements and the entities responsible for funding political ad campaigns on digital networks. Related proposals would require public-interest application programming interfaces (APIs) and publicly available databases featuring all political ads shown to users in a given election or social context (Wheeler 2017); a hypothetical sketch of such a transparency record follows this list. Access for researchers has proven contentious; for example, in summer 2021 Meta blocked scholars who researched ad transparency and misinformation on Facebook. Some in the policymaking community have suggested that the focus on political advertising is not enough—that such transparency should be imposed on all forms of targeted advertising over digital media platforms (Edelson et al. 2021). Others have suggested that such transparency should be imposed on more than just advertising—extending to all decision-making algorithmic applications (Ghosh 2019c), or all applications of platform content policy (Wood and Ravel 2017). Many have argued that transparency alone, however, cannot resolve the broader problems of the modern digital ecosystem, and that imposition of corporate transparency on internet firms can only represent a start to an earnest reform agenda (Goodman and Wajert 2017). Sridhar (2019) comprehensively explores the problems instigated by the corporate application of machine intelligence.
  3. Broader digital reform efforts. Policymakers—including regulatory authorities and legislators alike—in jurisdictions around the world have given serious consideration to digital consumer rights reforms that could serve to rebalance the distribution of power from the internet industry to consumers. Such agendas have variously been designed to tackle matters of consumer privacy and market competition alike—with the most significant reform in effect to date being the European Union’s General Data Protection Regulation, a sweeping set of regulatory stipulations on commercial providers over their data collection and use practices. Europe has also led the world in the enforcement of competition law. The European Commission has advanced major fines against some of the biggest internet firms for using their alleged respective monopoly positions over subsectors of the consumer internet as a bottleneck to drive prices high and prioritize their own products. These tactics could, under classical competition regulatory analysis, artificially drive prices in consumer markets up to monopolistic rates, deaden the pace of market innovation across the digital ecosystem, and diminish the quality of services rendered to end consumers. Further reforms are now under consideration in jurisdictions like the United States, United Kingdom, and India—with developments like the Competition and Markets Authority’s (2019) report, the Stigler report (Zingales, Rolnik, and Lancieri 2019), and the US House Judiciary Committee’s (2020) antitrust hearing moving the reform discussion forward. Some have suggested that dominant digital platforms exhibit powerful network effects and naturally raise barriers to entry, suggesting in turn that the appropriate remedy must involve integrating utility regulation theory (Iosifidis and Andrews 2019; Simons and Ghosh 2020). Sitaraman (2020) has argued there is a national security case for breaking up dominant internet firms. Policymakers have meanwhile further advanced measures to promote counter-propaganda efforts, including through the Countering Foreign Propaganda and Disinformation Act that was signed into law during the Obama administration, thus establishing the State Department’s Global Engagement Center (Carr 2017); subsequent analysis has indicated that such measures may be needed to contain certain harms in the digital media age (Hall 2017).
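To illustrate item 2 above, the sketch below shows the kind of record a public political-ad transparency database or API might expose to researchers and journalists. The field names and values are hypothetical, assembled for illustration rather than drawn from any bill, platform archive, or proposed standard.

```python
from dataclasses import dataclass, asdict
import json

@dataclass
class PoliticalAdRecord:
    """Hypothetical entry in a public political-ad transparency archive."""
    ad_id: str
    sponsor: str              # entity named in the "paid for by" disclaimer
    funder: str               # ultimate source of funds, where known
    spend_usd: float
    impressions: int
    targeting_criteria: dict  # e.g., geography, age range, inferred interests
    first_shown: str
    last_shown: str

record = PoliticalAdRecord(
    ad_id="2020-000123",
    sponsor="Example PAC",
    funder="Example Holdings LLC",
    spend_usd=12500.0,
    impressions=480000,
    targeting_criteria={"state": "PA", "age": "45-65", "interest": "veterans"},
    first_shown="2020-10-01",
    last_shown="2020-10-20",
)

# A public-interest API could serve such records as JSON for independent scrutiny.
print(json.dumps(asdict(record), indent=2))
```

Serving such records through an open, documented interface, rather than through one-off data-sharing agreements, is precisely the kind of access that the researcher-access disputes mentioned above have made contentious.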

It bears keeping in mind that a regime of regulated capitalism for the digital economy is of interest particularly in democracies—and that other forms of internet governance may be prioritized by governments and other stakeholder communities in other societies (Chin 2019). Scholars have acknowledged relatedly that competing governance norms may put at risk citizens’ human rights—including in the contexts of governmental media censorship and use in political systems like those of Myanmar (Lee 2019), Russia (Nocetti 2015), Saudi Arabia (Pan and Siegel 2019) and China (King, Pan, and Roberts 2013).

New directions for regulation

Many of the existing discussions about new regulation for the technology industry are clearly disconnected or otherwise under-coordinated amongst the jurisdictions, policymakers, independent scholars, and technology experts who are working to advance them. Such proposals in the United States range from baseline privacy legislation (Kerry and Chin 2020) that would protect all consumer data held by commercial entities by default, to new lawsuits put forward by state attorneys general and federal regulators to break up or otherwise impose fines and penalties on the likes of Meta (FTC 2020) and Google (Barr 2020). Much of the division among policymakers has been driven by political differences; on the topic of content moderation, for example, liberals and conservatives have variously pushed for remedies at opposite ends of the political spectrum.

It is emerging, however, that the social problems that have spread over the leading internet platforms are driven by more than just ineffective policy; these problems are instigated by the economic structures that define the consumer internet industry today—an industry governed by the uninhibited collection of data and use of algorithms to derive profiling insights from that data and manipulate the media experience. A unifying theory of the sector is taking shape: that the very way in which the dominant digital platforms are designed enables and encourages the spread of mis- and disinformation and other forms of harm. Given such a circumstance, it will increasingly behoove policymakers to consider ways to fight back against the information problem by formulating policies that rebalance the distribution of economic power between the firm and the individual consumer.

[1] As we have noted elsewhere, there is no direct evidence that foreign influence operations had a measurable effect on the 2016 election results.

Works Cited

Amor, Daniel. 2001. Internet Future Strategies: How Pervasive Computing Services Will Change the World. Prentice Hall PTR.

Ardia, David S. 2010. “Free Speech Savior or Shield for Scoundrels: An Empirical Study of Intermediary Immunity under Section 230 of the Communications Decency Act.” Loyola of Los Angeles Law Review 43 (2): 373–506.

Arsenault, Amelia. 2020. “Microtargeting, Automation, and Forgery: Disinformation in the Age of Artificial Intelligence.” University of Ottawa Research. http://ruor.uottawa.ca/handle/10393/40495.

Aufderheide, Patricia. 1999. Communications Policy and the Public Interest: The Telecommunications Act of 1996. Guilford Press.

Baade, Björnstjern. 2018. “Fake News and International Law.” European Journal of International Law 29 (4): 1357–76. https://doi.org/10.1093/ejil/chy071.

Barlow, John Perry. 2000. “The Next Economy Of Ideas.” Wired, October 1, 2000. https://www.wired.com/2000/10/download/.

Barr, William P. 2020. “Statement of the Attorney General on the Announcement Of Civil Antitrust Lawsuit Filed Against Google.” Department of Justice. October 20, 2020. https://www.justice.gov/opa/pr/statement-attorney-general-announcement-civil-antitrust-lawsuit-filed-against-google.

Barrett, Paul M. 2020. “Who Moderates the Social Media Giants? A Call to End Outsourcing.” NYU Stern Center for Business and Human Rights. https://bhr.stern.nyu.edu/tech-content-moderation-june-2020.

Bay, Sebastian, and Rolf Fredheim. 2019. How Social Media Companies Are Failing to Combat Inauthentic Behaviour Online. NATO STRATCOM Centre of Excellence. https://www.stratcomcoe.org/how-social-media-companies-are-failing-combat-inauthentic-behaviour-online.

Bechmann, Anja. 2020. “Tackling Disinformation and Infodemics Demands Media Policy Changes.” Digital Journalism 8 (6): 855–63. https://doi.org/10.1080/21670811.2020.1773887.

Benkler, Yochai, Robert Faris, and Hal Roberts. 2018. Network Propaganda: Manipulation, Disinformation, and Radicalization in American Politics. Oxford University Press.

Beranek, Leo L. 2007. “Who Really Invented the Internet?” Sound & Vibration. http://www.sandv.com/downloads/0701bera1.pdf.

Bergen, Mark, and Jennifer Surane. 2018. “Google and Mastercard Cut a Secret Ad Deal to Track Retail Sales.” Bloomberg, August 30, 2018. https://www.bloomberg.com/news/articles/2018-08-30/google-and-mastercard-cut-a-secret-ad-deal-to-track-retail-sales.

Bergmayer, John. 2019. “Speech and Commerce: What Section 230 Should and Should Not Protect.” Public Knowledge. September 24, 2019. https://www.publicknowledge.org/blog/speech-and-commerce-what-section-230-should-and-should-not-protect/.

Berners-Lee, Tim. 1999. “Realising the Full Potential of the Web.” Technical Communication 46 (1): 79–82.

Bhagwan, Varun, and Doug Sharp. 2014. Techniques for reducing irrelevant ads. United States US20160189236A1, filed December 29, 2014, and issued June 30, 2016. https://patents.google.com/patent/US20160189236A1/en.

Bowers, John, and Jonathan L. Zittrain. 2020. “Answering Impossible Questions: Content Governance in an Age of Disinformation.” SSRN Scholarly Paper ID 3520683, Social Science Research Network. https://papers.ssrn.com/abstract=3520683.

Bradshaw, Samantha, Lisa-Maria Neudert, and Philip N. Howard. 2018. “Government Responses to Malicious Use of Social Media.” Countering the Malicious Use of Social Media. NATO STRATCOM Centre of Excellence. https://www.stratcomcoe.org/government-responses-malicious-use-social-media.

Bruning, Deonne L. 1996. “The Telecommunications Act of 1996: The Challenge of Competition Law and Technology Issue.” Creighton Law Review 30 (4): 1255–86.

Carr, Bradley M. 2017. “Joint Interagency Task Force – Influence: The New Global Engagement Center.” US Army War College.

Chakrabarti, Samidh. 2018. “Fighting Election Interference in Real Time.” About Facebook (blog). October 18, 2018. https://about.fb.com/news/2018/10/war-room/.

Chandra, Ambarish. 2009. “Targeted Advertising: The Role of Subscriber Characteristics in Media Markets.” Journal of Industrial Economics 57 (1): 58–84. https://doi.org/10.1111/j.1467-6451.2009.00370.x.

Chin, Yik Chan. 2019. “Internet Governance in China: The Network Governance Approach.” SSRN Scholarly Paper ID 3310921, Social Science Research Network. https://doi.org/10.2139/ssrn.3310921.

Chow, Raymond. 2013. “Why-Spy: An Analysis of Privacy and Geolocation in the Wake of the 2010 Google Wi-Spy Controversy Notes & Comments.” Rutgers Computer and Technology Law Journal 39 (1): 56–94.

Citron, Danielle Keats, and Benjamin Wittes. 2017. “The Internet Will Not Break: Denying Bad Samaritans Section 230 Immunity.” SSRN Scholarly Paper ID 3007720, Social Science Research Network. https://papers.ssrn.com/abstract=3007720.

Cohen, Julie E. 2019. Between Truth and Power. Oxford University Press.

Competition and Markets Authority. 2019. “Online Platforms and Digital Advertising Market Study.” GOV.UK. July 3, 2019. https://www.gov.uk/cma-cases/online-platforms-and-digital-advertising-market-study.

Constine, Josh. 2019. “Facebook Will Shut down Its Spyware VPN App Onavo.” TechCrunch (blog). February 21, 2019. https://social.techcrunch.com/2019/02/21/facebook-removes-onavo/.

Crilley, Rhys, and Marie Gillespie. 2018. “What to Do about Social Media? Politics, Populism and Journalism.” Journalism 20 (1). http://journals.sagepub.com/doi/10.1177/1464884918807344.

Deibert, Ronald J. 2019. “The Road to Digital Unfreedom: Three Painful Truths About Social Media.” Journal of Democracy 30 (1): 25–39. http://muse.jhu.edu/article/713720.

Department of Justice. 2015. “Competition And Monopoly: Single-Firm Conduct Under Section 2 Of The Sherman Act : Chapter 7.” June 25, 2015. https://www.justice.gov/atr/competition-and-monopoly-single-firm-conduct-under-section-2-sherman-act-chapter-7.

DePaula, Nic, Kaja J. Fietkiewicz, Thomas J. Froehlich, A. J. Million, Isabelle Dorsch, and Aylin Ilhan. 2018. “Challenges for Social Media: Misinformation, Free Speech, Civic Engagement, and Data Regulations.” Proceedings of the Association for Information Science and Technology 55 (1): 665–68. https://doi.org/10.1002/pra2.2018.14505501076.

Devine, Kristine Laudadio. 2008. “Preserving Competition in Multi-Sided Innovative Markets: How Do You Solve a Problem Like Google.” North Carolina Journal of Law and Technology 10 (1): 59–118.

Doctorow, Cory. 2020. “How to Destroy ‘Surveillance Capitalism.’” OneZero (blog), Medium. February 4, 2021. https://onezero.medium.com/how-to-destroy-surveillance-capitalism-8135e6744d59.

Dommett, Katharine, and Sam Power. 2019. “The Political Economy of Facebook Advertising: Election Spending, Regulation and Targeting Online.” The Political Quarterly 90 (2): 257–65. https://doi.org/10.1111/1467-923X.12687.

Drake, William J., ed. 1995. The New Information Infrastructure: Strategies for U.S. Policy. Brookings Institution Press.

Earl, Jennifer, Katrina Kimport, Greg Prieto, Carly Rush, and Kimberly Reynoso. 2010. “Changing the World One Webpage at a Time: Conceptualizing and Explaining Internet Activism.” Mobilization: An International Quarterly 15 (4): 425–46. https://doi.org/10.17813/maiq.15.4.w03123213lh37042.

Edelson, Laura, Jason Chuang, Erika Franklin Fowler, Michael M. Franz, and Travis Ridout. 2021. “A Standard for Universal Digital Ad Transparency.” Occasional Papers. Knight First Amendment Institute at Columbia University. https://knightcolumbia.org/content/a-standard-for-universal-digital-ad-transparency.

Esteve, Asunción. 2017. “The Business of Personal Data: Google, Facebook, and Privacy Issues in the EU and the USA.” International Data Privacy Law 7 (1): 36–47. https://doi.org/10.1093/idpl/ipw026.

FTC (Federal Trade Commission). 2020. “FTC Sues Facebook for Illegal Monopolization.” Press release, Federal Trade Commission. December 9, 2020. https://www.ftc.gov/news-events/press-releases/2020/12/ftc-sues-facebook-illegal-monopolization.

Flew, Terry, Fiona Martin, and Nicolas Suzor. 2019. “Internet Regulation as Media Policy: Rethinking the Question of Digital Communication Platform Governance.” Journal of Digital Media & Policy 10 (1): 33–50. https://doi.org/10.1386/jdmp.10.1.33_1.

Ghosh, Dipayan. 2019a. “A New Digital Social Contract to Encourage Internet Competition.” Competition Policy International, 11.

———. 2019b. “Facebook’s Oversight Board Is Not Enough.” Harvard Business Review, October 16, 2019. https://hbr.org/2019/10/facebooks-oversight-board-is-not-enough.

———. 2019c. “The Commercialization of Decision-Making: Towards a Regulatory Framework to Address Machine Bias over the Internet.” Hoover Institution. May 6, 2019. https://www.hoover.org/research/commercialization-decision-making-towards-regulatory-framework-address-machine-bias-over.

Ghosh, Dipayan, and Ben Scott. 2018. “Digital Deceit: The Technologies Behind Precision Propaganda on the Internet.” Policy paper, New America Foundation. http://newamerica.org/public-interest-technology/policy-papers/digitaldeceit/.

Gibbs, Samuel. 2016. “Why Is Facebook Trying to Force You to Use Its Messenger App?” The Guardian, June 6, 2016. http://www.theguardian.com/technology/2016/jun/06/facebook-forcing-messenger-app-explainer.

Gillespie, Tarleton. 2007. Wired Shut: Copyright and the Shape of Digital Culture. MIT Press.

———. 2018. Custodians of the Internet: Platforms, Content Moderation, and the Hidden Decisions That Shape Social Media. Yale University Press.

Goldberg, David. 2018. “Responding to ‘Fake News’: Is There an Alternative to Law and Regulation?” Southwestern Law Review 47: 31.

Goldenziel, Jill I., and Manal Cheema. 2019. “The New Fighting Words?: How U.S. Law Hampers the Fight Against Information Warfare.” SSRN Scholarly Paper ID 3286847, Social Science Research Network. https://doi.org/10.2139/ssrn.3286847.

Goodman, Ellen P., and Lyndsey Wajert. 2017. “The Honest Ads Act Won’t End Social Media Disinformation, but It’s a Start.” SSRN Scholarly Paper ID 3064451, Social Science Research Network. https://doi.org/10.2139/ssrn.3064451.

Gorwa, Robert. 2019. “What Is Platform Governance?” Information, Communication & Society 22 (6): 854–71. https://doi.org/10.1080/1369118X.2019.1573914.

Gozzi, Raymond. 2001. “A Brief History of Internet Time.” ETC: A Review of General Semantics 58 (4): 470–76.

Gray, Mary L., and Siddharth Suri. 2019. Ghost Work: How to Stop Silicon Valley from Building a New Global Underclass. Illustrated edition. Boston: Mariner Books.

Greenstein, Shane. 2016. How the Internet Became Commercial: Innovation, Privatization, and the Birth of a New Network. The Kauffman Foundation Series on Innovation and Entrepreneurship. Princeton University Press. https://press.princeton.edu/books/hardcover/9780691167367/how-the-internet-became-commercial.

Hall, Holly Kathleen. 2017. “The New Voice of America: Countering Foreign Propaganda and Disinformation Act.” First Amendment Studies 51 (2): 49–61. https://doi.org/10.1080/21689725.2017.1349618.

Hallinan, Blake, Jed R. Brubaker, and Casey Fiesler. 2019. “Unexpected Expectations: Public Reaction to the Facebook Emotional Contagion Study.” New Media & Society, September. http://journals.sagepub.com/doi/10.1177/1461444819876944.

Houser, Kimberly A., and W. Gregory Voss. 2018. “GDPR: The End of Google and Facebook Or a New Paradigm in Data Privacy.” Richmond Journal of Law & Technology 25 (1): 1–109.

Hwang, Tim. 2017. “Dealing with Disinformation: Evaluating the Case for CDA 230 Amendment.” SSRN Scholarly Paper ID 3089442, Social Science Research Network. https://doi.org/10.2139/ssrn.3089442.

Iacobucci, Edward, and Francesco Ducci. 2019. “The Google Search Case in Europe: Tying and the Single Monopoly Profit Theorem in Two-Sided Markets.” European Journal of Law and Economics 47 (1): 15–42. https://doi.org/10.1007/s10657-018-9602-y.

Iosifidis, Petros, and Leighton Andrews. 2019. “Regulating the Internet Intermediaries in a Post-Truth World: Beyond Media Policy?” International Communication Gazette, February. http://journals.sagepub.com/doi/10.1177/1748048519828595.

Janowski, Tomasz, Elsa Estevez, and Rehema Baguma. 2018. “Platform Governance for Sustainable Development: Reshaping Citizen-Administration Relationships in the Digital Age.” Government Information Quarterly, Platform Governance for Sustainable Development, 35 (4, Supplement): S1–16. https://doi.org/10.1016/j.giq.2018.09.002.

Kaiser, Jonas, and Adrian Rauchfleisch. 2019. “The Implications of Venturing down the Rabbit Hole.” Internet Policy Review, June 27, 2019. https://policyreview.info/articles/news/implications-venturing-down-rabbit-hole/1406.

Keller, Daphne. 2019. “Platform Content Regulation – Some Models and Their Problems.” Center for Internet and Society. May 6, 2019. http://cyberlaw.stanford.edu/blog/2019/05/platform-content-regulation-%E2%80%93-some-models-and-their-problems.

Kelly, Makena. 2020. “Joe Biden Wants to Revoke Section 230.” The Verge, January 17, 2020. https://www.theverge.com/2020/1/17/21070403/joe-biden-president-election-section-230-communications-decency-act-revoke.

Kerry, Cameron F., and Caitlin Chin. 2020. “How the 2020 Elections Will Shape the Federal Privacy Debate.” Brookings (blog). October 26, 2020. https://www.brookings.edu/blog/techtank/2020/10/26/how-the-2020-elections-will-shape-the-federal-privacy-debate/.

King, Gary, Jennifer Pan, and Margaret E. Roberts. 2013. “How Censorship in China Allows Government Criticism but Silences Collective Expression.” American Political Science Review 107 (2): 326–43. https://doi.org/10.1017/S0003055413000014.

Kleinrock, Leonard. 2004. “The Internet Rules of Engagement: Then and Now.” Technology in Society, Technology and Science Entering the 21st Century, 26 (2): 193–207. https://doi.org/10.1016/j.techsoc.2004.01.015.

Klonick, Kate. 2017. “The New Governors: The People, Rules, and Processes Governing Online Speech.” Harvard Law Review 131 (6): 1598–1670.

Kornbluh, Karen, and Ellen P. Goodman. 2020. “Five Steps to Combat the Infodemic.” German Marshall Fund of the United States. https://www.gmfus.org/blog/2020/03/26/five-steps-combat-infodemic.

Kramer, Adam D. I., Jamie E. Guillory, and Jeffrey T. Hancock. 2014. “Experimental Evidence of Massive-Scale Emotional Contagion through Social Networks.” Proceedings of the National Academy of Sciences 111 (24): 8788–90. https://doi.org/10.1073/pnas.1320040111.

Kreiss, Daniel, and Shannon C. McGregor. 2019. “The ‘Arbiters of What Our Voters See’: Facebook and Google’s Struggle with Policy, Process, and Enforcement around Political Advertising.” Political Communication 36 (4): 499–522. https://doi.org/10.1080/10584609.2019.1619639.

Lee, Ronan. 2019. “Extreme Speech in Myanmar: The Role of State Media in the Rohingya Forced Migration Crisis.” International Journal of Communication 13.

Litt, Eden, Siyan Zhao, Robert Kraut, and Moira Burke. 2020. “What Are Meaningful Social Interactions in Today’s Media Landscape? A Cross-Cultural Survey.” Social Media + Society 6 (3). https://doi.org/10.1177/2056305120942888.

Livingstone, Sonia. 2019. “Audiences in an Age of Datafication: Critical Questions for Media Research.” Television & New Media 20 (2): 170–83. https://doi.org/10.1177/1527476418811118.

Lomas, Natasha. 2016. “WhatsApp to Share User Data with Facebook for Ad Targeting — Here’s How to Opt out.” TechCrunch (blog). August 25, 2016. https://social.techcrunch.com/2016/08/25/whatsapp-to-share-user-data-with-facebook-for-ad-targeting-heres-how-to-opt-out/.

Lyons, Kim. 2020. “Facebook Begins Merging Instagram and Messenger Chats in New Update.” The Verge, August 14, 2020. https://www.theverge.com/2020/8/14/21369737/facebook-merging-instagram-messenger-chats-update.

Manzi, Daniela C. 2019. “Managing the Misinformation Marketplace: The First Amendment and the Fight Against Fake News.” Fordham Law Review 87. http://fordhamlawreview.org/issues/managing-the-misinformation-marketplace-the-first-amendment-and-the-fight-against-fake-news/.

Marantz, Andrew. 2020. “The Man Behind Trump’s Facebook Juggernaut.” The New Yorker, March 2, 2020. https://www.newyorker.com/magazine/2020/03/09/the-man-behind-trumps-facebook-juggernaut.

Miguel, Juan Carlos, and Miguel Ángel Casado. 2016. “GAFAnomy (Google, Amazon, Facebook and Apple): The Big Four and the b-Ecosystem.” In Dynamics of Big Internet Industry Groups and Future Trends: A View from Epigenetic Economics, edited by Miguel Gómez-Uranga, Jon Mikel Zabala-Iturriagagoitia, and Jon Barrutia, 127–48. Springer International Publishing. https://doi.org/10.1007/978-3-319-31147-0_4.

Mosco, Vincent. 2018. “Social Media versus Journalism and Democracy.” Journalism 20 (1): 181–84. https://doi.org/10.1177/1464884918807611.

Napoli, Philip M. 2019. Social Media and the Public Interest: Media Regulation in the Disinformation Age. Columbia University Press.

Neumann, Odmar. 2016. “Beyond Capacity: A Functional View of Attention.” In Perspectives on Perception and Action, edited by Herbert Heuer and Andries Sanders. Routledge. https://doi.org/10.4324/978131562799-24.

Nield, David. 2019. “All the Ways Google Tracks You—And How to Stop It.” Wired, May 27, 2019. https://www.wired.com/story/google-tracks-you-privacy/.

Nocetti, Julien. 2015. “Contest and Conquest: Russia and Global Internet Governance.” International Affairs 91 (1): 111–30. https://doi.org/10.1111/1468-2346.12189.

Obear, Josh. 2018. “Move Last and Take Things: Facebook and Predatory Copying.” Columbia Business Law Review 2018 (3): 994–1059.

Pan, Jennifer, and Alexandra A. Siegel. 2020. “How Saudi Crackdowns Fail to Silence Online Dissent.” American Political Science Review 114 (1): 109–25. https://doi.org/10.1017/S0003055419000650.

Pariser, Eli. 2011. The Filter Bubble: How the New Personalized Web Is Changing What We Read and How We Think. Penguin.

Paxton, Ken. 2020. “AG Paxton Leads Multistate Coalition in Lawsuit Against Google for Anticompetitive Practices and Deceptive Misrepresentations.” Texas Attorney General. December 16, 2020. https://www.texasattorneygeneral.gov/news/releases/ag-paxton-leads-multistate-coalition-lawsuit-against-google-anticompetitive-practices-and-deceptive.

Perry, Tekla. 2020. “How Facebook Is Using AI to Fight COVID-19 Misinformation.” IEEE Spectrum, May 12, 2020. https://spectrum.ieee.org/view-from-the-valley/artificial-intelligence/machine-learning/how-facebook-is-using-ai-to-fight-covid19-misinformation.

ProPublica. n.d. “Recent Congressional Statements on ‘Sesta.’” https://projects.propublica.org/represent/statements/search?page=1&q=%22sesta%22.

Quinn, Kelly, Dmitry Epstein, and Brenda Moon. 2019. “We Care About Different Things: Non-Elite Conceptualizations of Social Media Privacy.” Social Media + Society, September. http://journals.sagepub.com/doi/10.1177/2056305119866008.

Ribeiro, Filipe N., Koustuv Saha, Mahmoudreza Babaei, Lucas Henrique, Johnnatan Messias, Fabricio Benevenuto, Oana Goga, Krishna P. Gummadi, and Elissa M. Redmiles. 2019. “On Microtargeting Socially Divisive Ads: A Case Study of Russia-Linked Ad Campaigns on Facebook.” In Proceedings of the Conference on Fairness, Accountability, and Transparency, 140–49. FAT* ’19. Association for Computing Machinery. https://doi.org/10.1145/3287560.3287580.

Roberts, Sarah T. 2019. Behind the Screen: Content Moderation in the Shadows of Social Media. New Haven: Yale University Press.

Roosendaal, Arnold. 2010. “Facebook Tracks and Traces Everyone: Like This!” SSRN Scholarly Paper ID 1717563, Social Science Research Network. https://doi.org/10.2139/ssrn.1717563.

Russell, Adrienne. 2019. “‘This Time It’s Different’: Covering Threats to Journalism and the Eroding Public Sphere.” Journalism 20 (1): 32–35. https://doi.org/10.1177/1464884918809245.

Schaller, R.R. 1997. “Moore’s Law: Past, Present and Future.” IEEE Spectrum 34 (6): 52–59. https://doi.org/10.1109/6.591665.

Schrader, Dawn E., and Dipayan Ghosh. 2018. “Proactively Protecting Against the Singularity: Ethical Decision Making in AI.” IEEE Security & Privacy 16 (3): 56–63. https://doi.org/10.1109/MSP.2018.2701169.

Shu, Kai, Amy Sliva, Suhang Wang, Jiliang Tang, and Huan Liu. 2017. “Fake News Detection on Social Media: A Data Mining Perspective.” ACM SIGKDD Explorations Newsletter 19 (1): 22–36. https://doi.org/10.1145/3137597.3137600.

Siekierski, B. J. 2019. “Deep Fakes: What Can Be Done About Synthetic Audio and Video?” Library of Parliament, In Brief, Publication No. 2019-11-E. April. https://lop.parl.ca/sites/PublicWebsite/default/en_CA/ResearchPublications/201911E.

Simons, Josh, and Dipayan Ghosh. 2020. Utilities for Democracy: Why and How the Algorithmic Infrastructure of Facebook and Google Must Be Regulated. Brookings Institution. August 11, 2020. https://www.brookings.edu/research/utilities-for-democracy-why-and-how-the-algorithmic-infrastructure-of-facebook-and-google-must-be-regulated/.

Sitaraman, Ganesh. 2020. “The National Security Case for Breaking Up Big Tech.” Knight First Amendment Institute. January 30, 2020. https://knightcolumbia.org/content/the-national-security-case-for-breaking-up-big-tech.

Sridhar, V. 2019. “Regulation of Machine Intelligence.” In Emerging ICT Policies and Regulations: Roadmap to Digital Economies, edited by V. Sridhar, 265–88. Singapore: Springer. https://doi.org/10.1007/978-981-32-9022-8_13.

Thune, John. 2020. “Thune, Schatz Introduce Legislation to Update Section 230, Strengthen Rules, Transparency on Online Content Moderation, Hold Internet Companies Accountable for Moderation Practices.” US Senator John Thune. 2020. https://www.thune.senate.gov/public/index.cfm/2020/6/thune-schatz-introduce-legislation-to-update-section-230-strengthen-rules-transparency-on-online-content-moderation-hold-internet-companies-accountable-for-moderation-practices.

Trump, Donald. 2020. “Executive Order on Preventing Online Censorship.” Trump White House Archive. May 28, 2020. https://trumpwhitehouse.archives.gov/presidential-actions/executive-order-preventing-online-censorship/.

Tuomi, Ilkka. 2006. Networks of Innovation: Change and Meaning in the Age of the Internet. Oxford University Press. https://oxford.universitypressscholarship.com/view/10.1093/acprof:oso/9780199269051.001.0001/acprof-9780199269051.

Twitter. n.d. “Political Content.” Accessed September 25, 2020. https://business.twitter.com/en/help/ads-policies/ads-content-policies/political-content.html.

US House Judiciary Committee. 2020. “Judiciary Antitrust Subcommittee Investigation Reveals Digital Economy Highly Concentrated, Impacted By Monopoly Power.” October 6, 2020. https://judiciary.house.gov/news/documentsingle.aspx?DocumentID=3429.

Vosoughi, Soroush, Deb Roy, and Sinan Aral. 2018. “The Spread of True and False News Online.” Science 359 (6380): 1146–51. https://doi.org/10.1126/science.aap9559.

Wang, Cheng Lu, Yue Zhang, Li Richard Ye, and Dat-Dao Nguyen. 2005. “Subscription to Fee-Based Online Services: What Makes Consumer Pay for Online Content?” Journal of Electronic Commerce Research 6 (8).

Warner, Mark R. 2017. “The Honest Ads Act.” US Senator Mark R. Warner. https://www.warner.senate.gov/public/index.cfm/the-honest-ads-act.

———. 2018. “Potential Policy Proposals for Regulation of Social Media and Technology Firms.” https://www.warner.senate.gov/public/_cache/files/d/3/d32c2f17-cc76-4e11-8aa9-897eb3c90d16/65A7C5D983F899DAAE5AA21F57BAD944.social-media-regulation-proposals.pdf.

Wheeler, Tom. 2017. “Using ‘Public Interest Algorithms’ to Tackle the Problems Created by Social Media Algorithms.” Tech Tank (blog), Brookings Institution. November 1, 2017. https://www.brookings.edu/blog/techtank/2017/11/01/using-public-interest-algorithms-to-tackle-the-problems-created-by-social-media-algorithms/.

Whittaker, Zack. 2019. “Facebook Collected Device Data on 187,000 Users Using Banned Snooping App.” TechCrunch (blog). June 12, 2019. https://social.techcrunch.com/2019/06/12/facebook-project-atlas-research-apple-banned/.

Wiggins, Richard. 2000. “Al Gore and the Creation of the Internet.” First Monday 5 (10). https://firstmonday.org/ojs/index.php/fm/article/download/799/708/4612.

Wood, Abby K., and Ann M. Ravel. 2017. “Fool Me Once: Regulating Fake News and Other Online Advertising.” Southern California Law Review 91 (6): 1223–78.

Young, Sean D. 2014. “Behavioral Insights on Big Data: Using Social Media for Predicting Biomedical Outcomes.” Trends in Microbiology 22 (11): 601–2. https://doi.org/10.1016/j.tim.2014.08.004.

Zingales, Luigi, Guy Rolnik, and Filippo M. Lancieri. 2019. Stigler Committee on Digital Platforms: Final Report. University of Chicago Stigler Center. https://www.chicagobooth.edu/research/stigler/news-and-media/committee-on-digital-platforms-final-report.

Zuboff, Shoshana. 2019. The Age of Surveillance Capitalism: The Fight for a Human Future at the New Frontier of Power. PublicAffairs.

Zuckerman, Ethan. 2020. “The Case for Digital Public Infrastructure.” Knight First Amendment Institute. January 27, 2020. https://knightcolumbia.org/content/the-case-for-digital-public-infrastructure.