Social Science Research Council
Research Review

Producers of Disinformation

Disinformation and the 2020 US Election

Though we can trace problems of information disorder throughout human history, many theorists agree that online communication, social media, and changes in mainstream media production—together with decreasing trust in institutions—have led to fundamental differences in the ways humans transmit information, both true and false. Four years have passed since deceptive communication bubbled to the surface of the US political conversation, leading to a massive wave of scholarship about dis- and misinformation. In the final stretch of another US election cycle—this one overshadowed by the Covid-19 “infodemic”—what has changed? What questions have researchers managed to answer since our initial publication? What are the crucial issues we still do not understand? 

As with other issues in disinformation studies (and social science in general), it’s often hard to make definitive statements, especially about trends that we see playing out in front of us. Hard data are extremely difficult to come by for researchers outside the social media platforms, and it’s even harder to establish cause and effect by looking at people’s media consumption. For example, researchers are still hotly debating whether Russian social media operations had a measurable effect on the 2016 US election. There is still no solid evidence that they did, though, as Karpf (2019) notes, the secondary effects of our fears about such operations are perhaps more significant than the operations themselves (see also Lim 2020). In 2020, intelligence officials and researchers have said Russia is again attempting to influence the US presidential election (Kim 2020; Goldman et al. 2020; Sanger and Barnes 2020). Kim found that a Russian campaign was using a mix of old and new strategies, but that its trolls had become more sophisticated in their mimicry of US domestic content and their use of nonpolitical material to conceal themselves. US intelligence officials have also warned that Iran and China may meddle in US politics to advance their interests, which are different from Russia’s (DNI 2020a, 2020b; see also Kirby 2020; and see Martin, Shapiro, and Ilhardt 2020 for a comprehensive dataset of foreign influence efforts). 

One of the major disinformation themes of 2020 relates to mail-in and absentee voting (OSoMe 2020). Such measures are essential to safe democratic participation during the Covid-19 pandemic, but like other aspects of the pandemic response, they have become highly politicized. A variety of disinformation narratives about mail-in voting or “ballot harvesting” have emerged from right-wing voices. These narratives—amplified by prominent voices and media elites—cast doubt on the voting process itself (Kennedy et al. 2020; Garcia-Camargo et al. 2020; Benkler et al. 2020). Some observers have feared that they could contribute to the likelihood of an ugly contested election. Though the Covid-19 context is new, these narratives fit longstanding patterns of unsubstantiated claims of “voter fraud” coming from the political right, and they support voter suppression efforts frequently aimed at “historically disenfranchised communities, including communities of color, low-income communities, and immigrant communities” (Vandewalker 2020), groups that often support Democratic candidates. “Voter fraud” stories also appear on networks of so-called “pink slime” journalism sites, conservative-backed outlets that masquerade as local news sites but feature algorithmically generated stories and conservative talking points (Bengani 2019, 2020). Black voters have also been targeted by “voter depression” efforts aimed at instilling apathy and discouraging people from voting (Glaser 2020; Youn 2020). 

As in 2016, this year has also seen disinformation activity aimed at inflaming racial tensions in the US, directed in part at progressive and Black activists, but also at the white supremacists and neo-Nazis who oppose them in attempts to provoke violence (Collins, Zadrozny, and Saliba 2020; McBride and Stern 2020; Seitz 2020; Collins-Dexter 2020). While much of the popular worry surrounding BLM-centric disinformation in 2016 centered around Russia, as Osei-Opare (2020) explains, fears of foreigners stoking racial tensions in the US play on “old tropes about Black people being politically gullible and vulnerable to foreign meddling.” Rather than embracing unfounded fears of a Kremlin bogeyman, Osei-Opare argues, we must focus on confronting the systemic racism of a society that treats Black lives as expendable. 

Elsewhere on the domestic disinformation front, engagement on Facebook with outlets publishing disinformation disguised as journalism remains high, according to an October 2020 study by the German Marshall Fund, up 102 percent from the same period in 2016 (Kornbluh, Goldstein, and Weiner 2020). The right wing of the US media spectrum remains saturated with dis- and misinformation and highly biased news, as it was during the 2016 election cycle. Research has found that consumers of conservative media were more likely to believe Covid-19 conspiracies and that higher Fox News viewership in an area correlated with fewer people staying home (Ingraham 2020; Jamieson and Albarracin 2020; Simonov et al. 2020). A recent report from the Observatory on Social Media at Indiana University found that nearly 80 percent of its survey participants had encountered at least one of five common false narratives, and that a majority believed at least one of them. “More self-identified Republicans and Independents believed all five [false] narratives than Democrats,” the authors found (OSoMe 2020). Right-wing commentators, amplified by Breitbart and Fox, have stoked baseless fears of leftist coup attempts, leading some observers to fear right-wing violence related to the election (Alba 2020). 

In another evolution, both sides of the political spectrum are paying social media influencers to push messages and express support for their platforms, and are frequently concealing those payments, according to a recent report by Goodwin, Joseff, and Woolley (2020). “This amounts to a new and growing form of ‘inorganic’ information operations—elite-dictated propaganda through trusted social media spokespersons,” they found. 

Another potential tactical evolution is the use of fake or low-quality academic journals to spread false information under the guise of valid research, though it is still too early to tell if this practice is really occurring, and how deliberate it is. The FBI (2020) issued a warning in October, mentioning the 2020 US election in particular, and other social-media watchers have sounded alarms about “weaponized” scientific or pseudoscientific preprints (Stein 2020; see also Gitlin 2020), though again, it is unclear if those warnings are well founded. This is a separate issue from academic journals unintentionally publishing flawed research that makes it through their peer review processes. That would fall under our definition of “misinformation,” and this year has seen numerous retractions related to Covid-19 research, including two very high-profile papers (Soltani and Patini 2020). 

In their struggle to moderate content on their platforms, the major US social media companies have made some changes spurred by the Covid-19 pandemic and infodemic (Donovan and Wardle 2020), and have taken more aggressive measures to limit conspiratorial, false, and some kinds of political content online. For example, Facebook has taken some actions against a prominent conspiracy theory named after a letter of the alphabet, and Twitter has banned “political” advertising. But neither Facebook nor Twitter has yet shown that it is willing to invest enough in content moderation to maintain healthy information environments—their business models, like those of other social media, are predicated on engagement, and engagement so far has meant polarizing and inflammatory content. 

Further, as evidenced by the report from Kornbluh, Goldstein, and Weiner, the circulation of bogus content by individual social media users is by no means the only problem. Problematic news outlets, ranging from outright false fringe web publications to highly distorted professional channels like Fox, are major contributors to our highly polarized and toxic information environment. Lastly, as with mainstream media outlets, elites and political operatives continue to play an outsized role in the spread of harmful—and in some cases fatal—information. Recent examples include the pro-Trump teen “troll factory” taken down by Facebook and Twitter (CNetS 2020), the rumors of a leftist coup promulgated by a radio host and other outlets, and Project Veritas’ amplification of false narratives about mail-in voting. Specific to the Covid-19 pandemic, a Cornell study determined that Trump “was likely the largest driver of the COVID-19 misinformation ‘infodemic’” (Evanega et al. 2020). 

As of this writing, it remains to be seen who will win the 2020 US election. But one thing is clear: Disinformation is not going away. In highly polarized contexts, the benefits to politicians of spouting false information outweigh the risks, and until that changes, they will continue to do so. We’ve seen disinformation aimed at Black Lives Matter activists and the white supremacists who oppose them, showing how disinformation both arises from and inflames structural inequalities. While many Covid-19 dis- and misinformation narratives reflect the ways the pandemic has become politicized, other aspects of the “infodemic” show that commercial and pseudo-scientific dis- and misinformation remain issues of global concern. Vaccine hesitancy, coupled with the inherent complexities of vaccine development and the difficulty of conveying uncertainty in scientific communication, already looms as a major issue even before a Covid-19 vaccine is released.  

In truth, disinformation has always been with us, though the affordances of social media have renewed its potential. The events of the last year have done nothing to show this is likely to change. The longstanding and complex issues discussed above, compounded by continuing issues with platform governance and a lack of comprehensive mitigation strategies (Kornbluh and Goodman 2020), indicate that it will continue to be a significant societal concern for some time. 

Introduction

One of the promises of the internet has been that anyone with a device and a data connection can find an audience. Until relatively recently, many social scientists and activists joined technologists in their optimism that social media and digital publishing would connect the world, give a voice to marginalized communities, bridge social and cultural divides, and topple authoritarian regimes.

Reality has proven to be far more complicated. Despite that theoretical access to an audience, enormous asymmetries and disparities persist in the ways information is produced. Elites, authoritarian regimes, and corporations still have greater influence and access to communicative power. Those elites have retained their ability to inject favorable narratives into the massive media conglomerates that still dominate mediascapes in much of the world. At the same time, we have observed the darker side of the internet’s democratic promise of an audience. In a variety of global contexts, we see political extremists, hate groups, and misguided health activists recruiting members, vilifying minorities, and spreading misinformation. These online activities can have very real life-or-death consequences offline, including mass shootings, ethnic strife, disease outbreaks, and social upheaval.

Research into mis- and disinformation is extremely difficult, because by its very nature, disinformation attempts to conceal its origins and motivations. In part because of these difficulties, the findings of individual research projects in this area frequently seem contradictory. However, as Ruths (2019) notes, it is important to view these individual findings as describing different elements of the same process of creating and spreading disinformation. The goal of this research review is exactly that—to describe emerging research consensus on the process of producing disinformation online. After an overview of recent research on producers of disinformation, and their commercial and political motivations, we suggest some avenues for future research. As a reminder, MediaWell provisionally defines “disinformation” as a rhetorical strategy that produces and disseminates false or misleading information in a deliberate effort to confuse, influence, harm, mobilize, or demobilize a target audience. (For more on this complex topic, see our literature review on Defining “Disinformation.”)

We outline several points of scholarly consensus about disinformation below. The following are some of the key takeaways:

  • Governments and individuals have produced disinformation throughout history, and in some ways disinformation in mass media is a very old problem. That said, the internet and social media are opening up new possibilities for spreading false or misleading narratives.
  • Individuals and institutions who produce disinformation may do so for either financial or ideological reasons. However, the line between those motivations can be somewhat blurry.
  • We do not know what effects disinformation campaigns actually have on societies and elections. The extent to which a disinformation campaign, on its own, might be able to sway an election is undetermined. However, disinformation campaigns contribute to (and stem from) environments of distrust in institutions, and that distrust has implications for how societies govern themselves.

Disinformation in the internet age

Disinformation has a lengthy and enduring history. For example, a completely fabricated document purporting to describe an international Jewish conspiracy emerged in Russia at the beginning of the twentieth century and has influenced anti-Semitic and conspiratorial rhetoric to this day. At times, sensationalism bordering on fakery has been an important tactic for news outlets seeking to boost circulation—US newspaper magnate William Randolph Hearst is credited with building support for war against Spain in the late 1890s with false and inflammatory reporting (Cull 2003), an example that bridges both political and commercial motivations.

Scholars of media systems generally agree that disinformation is different in the internet age. Despite diversity in the content and producers of disinformation, researchers are beginning to point to emerging patterns. It’s worth mentioning, however, that the internet (or Facebook, or any particular technology) is not the problem in and of itself. As Benkler, Faris, and Roberts (2018) remark, countries with more functional media landscapes than the US, and different cultural ways of making sense of the world, will have different experiences as their mediascapes evolve with these new technologies.

There are a number of factors that contribute to contemporary disinformation environments, and we address a few of them below:

  • Increasing affective polarization may be related to increasing disinformation.
  • The internet facilitates certain kinds of concealment or anonymity.
  • Disinformation producers may exploit changes in journalistic standards and regulatory regimes, as well as varying levels of digital literacy.
  • Disinformation producers can more easily make their messages appear organic and distributed among many sources.

The amount, spread, and prominence of disinformation certainly appear to be trending higher in some contexts, along with increased affective polarization (for more, see our literature review on Contexts of Misinformation) and a rise in extremist discourse. Polarization and dis- and misinformation appear to be related, though the existence and exact nature of those relationships remain elusive and will require further research (Iyengar, Sood, and Lelkes 2012; Iyengar and Westwood 2015; Rogowski and Sutherland 2016; Tucker et al. 2018).

Next, the internet allows certain kinds of concealment. While anonymous or pseudonymous publication has been possible for a long time, the internet allows more people to be anonymous or pretend to be someone else entirely. In itself, this anonymity is not a bad thing. It allows vulnerable people to connect with others and explore their identities, enables political activism in repressive environments, and promotes the exchange of unorthodox ideas. But it also helps the creators of disinformation narratives make it appear that their messages are coming from dozens or hundreds of organic voices instead of a handful of news outlets or government bureaus. Anonymity also facilitates the spread of ideas that may seem bizarre or extreme, like conspiracy theories. Once in the social media ecosystems, these narratives spread further, and a few are then amplified by professional media, politicians, and political action committees (Crosset, Tanner, and Campana 2018; Farrell et al. 2019; Gray 2012; Phillips 2018; Sobieraj 2019; Tucker et al. 2018).

Journalistic standards and journalists’ understandings of professionalism have shaped the contexts in which audience members assess the reliability of information. These standards and understandings have developed over decades, and have simultaneously enabled news consumers to set their expectations for news and allowed publishers to establish lucrative informational monopolies. They have changed over time and can vary widely from country to country, but the digital age has ushered in major changes to the ways that news is produced and paid for, and these changes may have implications for how people assess credibility (Donsbach and Klett 1993; Jahng and Littau 2015; Russell 2010, 2011; Usher and Carlson 2018; Waisbord 2013). Similarly, societies have also come to rely on various regulatory institutions to set standards for mass media producers. While some countries have stricter regulations, including overt media censorship, others may rely more on libel laws and courts to enforce standards. In the US, for example, the FCC’s equal-time rule requires broadcasters using public airwaves to offer equal advertising sales to all candidates for an office (McCraw 2009; see McChesney 1995 for a history of mass media regulation in the US).

Unpacking the roles of journalistic standards and regulations and cultural norms in how people assess news would require its own literature review (if not several books), but the key takeaway is this: journalistic standards, regulations, and cultural norms help shape how we assess information—what we might call our media literacy, or news literacy, or digital literacy. Changes to the news industry and new information technologies have disrupted some (but by no means all) of that background. Producers of disinformation have taken advantage of those disruptions to help propagate their preferred narratives. Some information consumers adapt to new realities more quickly than others, making certain demographics more vulnerable to disinformation tactics. This has been particularly evident in the US among news consumers 65 and older, who are far more likely to spread dis- and misinformation narratives on Facebook (Guess, Nagler, and Tucker 2019; see also Filipec 2019, with similar findings from the Czech Republic about email).

The last crucial difference about disinformation in the digital age is that the origin of these messages is often distributed—or appears to be distributed—among a variety of social media accounts, user reviews, online news outlets, and other online expressions. Disinformation narratives may propagate via individual trolls on platforms like Reddit or 4chan, or simultaneously through connected and coordinated disinformation accounts on other platforms. From there they may bounce to outlets like Drudge, and some more prominent narratives might make their way to mainstream professional media outlets, where false narratives gain exposure and legitimacy even when they are the subjects of reports trying to debunk them (Benkler, Faris, and Roberts 2018; Marwick and Lewis 2017; Phillips 2018). It takes money and effort by professional news organizations and their journalists to expose the operators of “fake news” sites like the Macedonian teenagers’ network. Individual internet users (and many news outlets) are not capable of investing those resources. The social networks’ automated detection methods are improving, and they have partnered with a variety of fact-checking organizations. But given the relatively minimal costs and risks associated with running a network of bogus, anonymous social media accounts, the scale of the problem is significant.

Who produces disinformation, and why?

A very wide variety of actors produces disinformation, but we can often reduce their motivations to two broad categories: money and power. As we discuss below, much of the disinformation circulating in recent years is motivated by commercial gain or by attempts to consolidate political power, sway elections, and drive wedges between opponents. The same is true of many historical episodes of disinformation. Wardle and Derakhshan (2017) have also proposed “social” and “psychological” motivation categories. The former would involve making connections with an online or offline group, and the latter would involve gaining “prestige or reinforcement.” Trolling would seem to fit in either one, or both, of those categories. However, definitions of trolling and trolls have evolved in the recent history of the internet, and references to Russian or Iranian political trolls and “troll farms,” for example, do not match older understandings of trolling as provocation for entertainment (for more, see our literature review on Defining “Disinformation”).

While we often know little about the actual individuals who create disinformation in text, visuals, and audio media, researchers, intelligence officials, and journalists have been able to link disinformation narratives to the entities that sponsor them. Those have included governments, militaries, and intelligence agencies; political parties; commercial interests and scammers; and advocacy groups and hate groups. Alongside individual trolls, these actors often act together or through one another—not necessarily in a coordinated or conscious fashion, but picking up on one another’s tactics and narratives. One example would be the “Pizzagate” incident. Early forms of a conspiracy theory involving Hillary Clinton and a purported pedophile group initially circulated on extremist social media, fueled by trolls and political extremists. They were then picked up on commercial clickbait sites, spread on Facebook, and amplified by Trump allies. After Russian intelligence agents hacked the Democratic National Committee and released its emails through WikiLeaks, the narrative eventually coalesced on 4chan around the idea that a pedophile ring linked to the Clinton campaign operated out of a Washington, DC, pizza parlor. After an armed man entered the restaurant saying he was investigating the claims, that false narrative became amplified by the mainstream media. This one incident shows the potential interconnections among individual trolls, social media platforms from 4chan to Facebook, clickbait sites, mainstream and fringe media, foreign intelligence agencies, political campaigns, and citizens willing to take extreme actions offline (Buckels, Trapnell, and Paulhus 2014; DiResta et al. 2018; Hwang 2017; Marwick and Lewis 2017; Tucker et al. 2018).

Commercial motivations

Most scholarly attention to disinformation is focused on its political implications, but a lot of disinformation is motivated by profit. However, it can sometimes be extremely difficult or impossible to distinguish between the two. Disinformation narratives that begin as commercial clickbait often become repurposed by politically motivated actors (Dewey 2016). As Bennett and Livingston (2018) note, “When this ‘for-profit’ fake news takes on partisan aspects, as it often does, it may be picked up by social media bots and distributed as part of larger disinformation campaigns.”

In a recent research report from the Philippines, Ong and Cabañes (2019) demonstrate how local conditions of image-based politics, a digitally savvy workforce, and populist resentment have resulted in a spectrum of political to profit-driven disinformation operations. They describe an in-house staff model, an advertising and PR model, and a clickbait model—in addition to state-sponsored disinformation—sometimes operating alongside one another. These kinds of distributed labor arrangements, they argue, offer plausible deniability to the politicians, donors, and strategists who orchestrate disinformation campaigns.

Prominent disinformation producers can find both political and commercial success by selling advertisements or products. For example, right-wing US-based conspiracy theorist Alex Jones sells a wide range of expensive, dubious nutritional supplements through his media properties, and some estimates have put his sales in the tens of millions of dollars (Bennett and Livingston 2018; Brown 2017; Warzel 2017). Moreover, there is growing evidence that commercial disinformation operations, botnets, and troll farms are available for hire by political and ideological movements, further blurring these conceptual lines (Applebaum et al. 2017; Bay and Fredheim 2019; Bastos and Mercea 2019; for more on this difficulty, please see our literature review on Election Interference).

Financial gain is one of the simplest motivations for the production of disinformation. In China, for example, large numbers of individuals known as the “internet water army” work as hidden paid posters to promote or denigrate businesses (Chen et al. 2011). Elsewhere, with even minimal linguistic and cultural knowledge, disinformation producers can reach across national borders and access substantial audiences (Higgins, McIntire, and Dance 2016). One of the best-known recent cases of apparent commercially motivated disinformation was a network of Macedonian youth who produced voluminous quantities of disinformation aimed at supporters of Donald Trump leading up to the 2016 US election. Supported by advertising sales, the sites formed a lucrative flashpoint in the eruption of concern surrounding “fake news,” social media, and mis- and disinformation in US politics (Hwang 2017; Silverman and Alexander 2016). The low costs of distributed or automated digital production—and the minimal costs of creating bogus news content—mean that even small advertising revenues go into the profit column. As Hwang notes, the low overhead of commercial disinformation sites encourages them to copy material from other sites, modifying it only slightly, if at all. Even commercial disinformation with no obvious political motives, such as bogus product reviews on retail websites, still contributes to a confusing and demoralizing media environment where it seems nothing can be trusted.

Political motivations

The motivations to produce disinformation narratives are the same as the motivations to produce many other kinds of political narratives: to secure popular support and mobilize voters, to differentiate in-groups and out-groups, to promote nationalism, to establish who should be considered an ally, rival, or enemy, and so on. Just as with other forms of media messages, individuals who consume disinformation are not passive receptors. They take in, interpret, analyze, accept, reject—and sometimes reproduce—disinformation messages in ways that are influenced by their beliefs, education, prior experiences, and social contexts.

Prompted by the Brexit referendum, social divisions in India, the election of Donald Trump, and Russia’s involvement in the Ukraine conflict, a large volume of recent scholarship has focused on the more overtly political applications of disinformation. Disinformation can be employed to mobilize one’s own supporters or demobilize opponents, or merely to sow confusion and obscure political structures and social changes. While those are among the most prominent examples of political disinformation campaigns, a variety of global contexts have experienced surges in extremist and nationalist politics, with accompanying information disorders (Bennett and Livingston 2018; Wardle and Derakhshan 2017).

So what makes political disinformation—lies, slanders, provocations, propaganda—different from other political communication? Disinformation violates ethics and ideals that are important in societies everywhere, but it is particularly corrosive to democratic ones. Democratic governance is predicated on the idea that citizens have a right and a duty to be properly informed by responsible media, so that they can make appropriate decisions in the common good and hold officials to account. (This, at any rate, has been the ideal—see David Karpf’s Expert Reflection for his take on how this myth works in practice.) Such ideals are enshrined in documents like the 1947 report of the Commission on Freedom of the Press (commonly known as the Hutchins Commission), a landmark document that reveals postwar fears about the power of mass media. In this light, concerns over disinformation become concerns about sustaining a core ideal of democratic society.

Russia

Russia has been orchestrating prominent, comprehensive, and sophisticated disinformation campaigns aimed at audiences both within and beyond its borders.[1] In light of its historical geopolitical importance and renewed rivalry with the United States, Western scholars have been closely following its tactics and attempting to explain its motivations. In a 2019 report, Martin and Shapiro identified 38 recent or ongoing Russian influence campaigns aimed at 19 countries, most of them in Europe or North America. The Russian aims were diverse and ranged from supporting separatist or extremist movements to undermining relationships between European neighbors. Some of Russia’s campaigns involved influencing elections in one way or another, which we address at greater length here (see also DiResta et al. 2018). Again, this evidence of influence efforts should not be confused with measurable effects on elections. There is no evidence that Russian influence directly affected voting behavior (Benkler 2019). As Sides, Tesler, and Vavreck (2019) point out, reports that describe tens of thousands of bots, or millions of disinformation tweets, suffer from a “denominator problem”: such Russian content represented “an infinitesimal fraction” of overall social media content during the election. This is not to say that it had no effects—Russian efforts have contributed to an atmosphere of distrust—but as Benkler cautions, we should not give them more credit than they deserve, at the risk of further undermining trust in institutions.

Writing from a security perspective, some observers argue that Russia sees itself in a political conflict, with the US and other Western countries trying to reduce its sphere of influence (Karlsen 2016; Franke 2015; Heier 2016; Danish Defense Intelligence Service 2015). In this light, Russia’s disinformation efforts should be considered alongside its invasion of Georgia in 2008, military interventions in Ukraine (2014) and Syria (2015), and more recently its support for the Maduro regime in Venezuela (Toosi 2019). These represent efforts to buffer its borders, expand influence, and protect strategic and economic interests.

Karlsen (2016) and Nimmo (2015) describe Russia’s disinformation apparatus as inherently multifaceted. Broadly speaking, its aims have been to support the state’s political objectives, to gain audience support, and to discredit the West. In this last area, Nimmo argues, Russia employs a four-pronged strategy of “dismiss the critic, distort the facts, distract from the main issue and dismay the audience.” Karlsen argues that the Russian disinformation apparatus is merely an extension of strategies that date back to the Soviets. The tactics being deployed internationally are copied from internal methods for discrediting and demobilizing political opposition.

Approaching from a slightly different perspective, Pomerantsev and Weiss (2014) argue that Russia is using the tenets of liberal democracy against liberal democracy itself. The strategy involves confusing the audience more than convincing them—encouraging people to give up searching for truth rather than persuading them that something is or is not real.

Fedor and Fredheim (2017) continue in that vein, arguing that Russian creators of political messaging see social media platforms as tools of Western domination and are responding to the new affordances of visual culture on platforms such as YouTube. As the authors dive into the work of a Russian media entrepreneur and video creator, they link his work to a much broader strategy of limiting dissent online and managing narratives without obvious censorship or overt repression. Despite those strategic linkages, they show that the actual process of producing content can “be a rather messier and less linear process than the standard model of a top-down ‘Kremlin propaganda machine’ would suggest.” They see Russia’s strategic goal as discrediting systems of liberal democracy while legitimating an authoritarian alternative.

Political disinformation in the United States

Many recent studies of political communication in the United States have been sparked by the rhetorical strategies and disinformation narratives of the Donald Trump campaign and administration. Trump himself may turn out to be an outlier in the study of political disinformation in American politics—it remains to be seen whether his successors embrace his rhetorical strategies, and with what success. McGranahan (2017) argues that Trump’s lying represents an effort not only to rewrite history, but to claim authenticity while denying it to his political rivals and to dehumanize various groups through his populist messaging.

In the United States, scholars have noted a distinct asymmetry in the production of and exposure to disinformation on the political spectrum, with those on the political right more likely to produce and circulate disinformation (Barrett 2019a, 2019b; Benkler, Faris, and Roberts 2018; Bennett and Livingston 2018; Faris et al. 2017; Marwick 2018; Nithyanand, Schaffner, and Gill 2017). This seems to be true across media types and social media platforms, but as Marwick (2018) notes, citing Benkler and Faris, this does not mean that conservatives are more likely to believe misinformation—but it does mean that their information environment is disproportionally filled with dubious narratives and low-quality content. There are also asymmetries within the conservative media landscape, as Bennett and Livingston (2018, 125) argue that “strategic partisan disinformation” is what distinguishes the so-called alt-right from other conservative media. Benkler, Faris, and Roberts (2018) find that on the American left, the mainstream, professional media outlets are “playing a moderating effect on partisan bullshit,”[2] whereas on the right, the mainstream professional outlets actually amplify that content. They trace this asymmetry to long-term shifts in the American political landscape, arguing that the forces commonly blamed for disinformation—including Russia and Facebook—all rely “on the asymmetric partisan ecosystem that has developed over the past four decades. What that means in practice, for Americans, is that solutions that focus purely on short-term causes, like the Facebook algorithm, are unlikely to significantly improve our public discourse” (2018, 21, 98). They argue instead that the best fix for that asymmetry would be to the aspects of professional journalism that serve to amplify propaganda, like the include-both-sides model of objectivity. In other words, the problem is not social media or foreign interference themselves. The problem is a much deeper division that foreign actors take advantage of, and that fuels the emotion-driven social media business model.

In a similar vein, Nithyanand, Schaffner, and Gill (2017) found marked asymmetries in what they call “incivility” and “fake news” between Democratic- and Republican-leaning Reddit pages, with the American right exposed to significantly more of both kinds of content. In their preprint manuscript, the authors said that Republican subreddits were far more likely to be exposed to so-called fake news than any other group, which had not been the case before the 2016 elections. Noting that the two major candidates were the most disliked in modern American history (see also Saad 2016), they situate their findings in a context of increasing partisan polarization (see our literature review on Contexts of Misinformation for more). They found that political discussions had declined in complexity and increased in incivility since the start of the 2016 election cycle, and that “notably, this rise in incivility is overwhelmingly located on Republican (rather than Democratic) subreddits” (2017, 14). While incivility and misinformation are not the same thing, the authors argue that misinformation helps fuel incivility.

However, while the recent history of right-leaning media in the US has resulted in that part of the political spectrum being disproportionately engaged in the production, circulation, and consumption of dis- and misinformation (Barrett 2019a, 2019b; Benkler, Faris, and Roberts 2018; Narayanan et al. 2018), the production and targeting of disinformation is certainly not limited to the political right. A left-leaning group aimed a disinformation campaign[3] at Alabama’s 2017 special Senate election, attempting to persuade conservatives not to vote for scandal-plagued, far-right candidate Roy Moore, who narrowly lost (Timberg et al. 2019). At the same time, Russian trolls mounted an operation in support of Moore’s candidacy (Martin and Shapiro 2019). That said, Bennett and Livingston (2018) argue that while the radical right embraces strategic, partisan disinformation tactics, the radical left is more likely to organize mobilizations such as Occupy Wall Street and the 15-M anti-austerity campaign in Spain. They further argue that the United States is “exceptional in the degree to which disinformation has become fully integrated into national politics” (2018, 130) but that similar patterns of disruption can be seen in many other democracies, a trend they attribute to right-wing movements disenchanted with institutions and governments.

Evidence is growing that disinformation campaigns disproportionately target marginalized and vulnerable groups in “divide-and-conquer” strategies designed to manipulate and disenfranchise. Joseff and Woolley (2019), in the executive summary of a series of case studies from the 2018 midterm elections, describe a spike in foreign and domestic efforts targeting such groups with disinformation and harassment. Their case studies included groups such as gun-owning Black women, Jewish Americans, and moderate Republicans. Most of the harassment directed at them was produced by individuals, but bots were deployed to spread disinformation narratives. The authors further described how “adversarial groups are co-opting images, videos, hashtags, and information previously used or generated by social and issue-focused groups—and then repurposing this content in order to camouflage disinformation and harassment campaigns” (2019, 6). Disinformation campaigns frequently relied on longstanding racial and cultural stereotypes and conspiracy narratives to vilify and divide marginalized groups.

Other case studies – Venezuela, India, Myanmar

Coordinated disinformation campaigns have occurred in a variety of global situations, each with its own unique political context. A 2019 report by the Oxford Internet Institute found “evidence of organized social media manipulation campaigns … in 70 countries, up from 48 in 2018 and 28 in 2017,” and 52 of those campaigns involved the creation of disinformation or manipulation of media (Bradshaw and Howard 2019, i, 15). In some areas, disinformation producers have played off longstanding sectarian or ethnic tensions, while in others they have sought to aggravate class divides or existing political schisms. In Venezuela, for example, prior to the Guaidó crisis in the spring of 2019, a 2015 study found that both the regime and its opposition used botnets to amplify their messaging, with the most active accounts in the service of the opposition. Some bots were posing as politicians and political organizations, typically associated with the opposition Voluntad Popular party. Bots were generally used to promote what the authors termed “innocuous” political events rather than spread misinformation (Forelle et al. 2015). A few years later, as political and economic turmoil deepened and the government shut down all but its own allied outlets, news reports said that both sides were filling the vacuum with online disinformation campaigns to obscure events and discredit opponents (Bandeira 2019; Nugent 2019; Toro 2019). Twitter said it had shut down 2,000 accounts, more than half of which it linked to a state-aligned influence effort aimed at domestic audiences (Lima 2019).

In India, the world’s largest democracy, concerns have been growing over online disinformation, political divisions, and sectarian-ethnic violence. India is the largest market for WhatsApp, a messaging service owned by Facebook. Rumors of child kidnappings, cow slaughters, and other inflammatory content have circulated rapidly on the app, and have been blamed for dozens of killings. In response to the violence, the government has pressured WhatsApp to make technological changes (Arun 2019; Bengali 2019; Dixit and Mac 2018; Madrigal 2018). However, the exact circumstances of many of these incidents are difficult to verify, and it is not appropriate to blame them solely on social media, as the history of sectarian and ethnic violence in India is long and complicated—nor do we wish to contribute to a discourse with colonial roots that positions Indian citizens as impressionable and in need of protection from information (Mazzarella 2013).

It is also not clear who produces or spreads these disinformation narratives in India, in part because of WhatsApp’s focus on privacy and opacity—users of the service cannot see where a forwarded message originated. While WhatsApp may facilitate the spread of certain kinds of information, Madrigal (2018) and Arun (2019) argue that WhatsApp itself is not the root of the violence problem, countering media narratives like a BuzzFeed article titled “How WhatsApp Destroyed a Village” (Dixit and Mac 2018). Madrigal points out that WhatsApp rumors in other contexts do not typically lead to communal violence. Similarly, Arun points to mob violence incidents in which social media do not seem to have played a significant role. Arun and other observers have gestured instead to right-wing political groups, the mobilization of hatred, nationalist politics, and failures in government as the true sources of the violence (Arun 2019; Madrigal 2018; Poonam and Bansal 2019; Yashoda 2018).

The Indian government has responded to the violent incidents by shutting down mobile internet service in affected areas, with more than 116 such shutdowns in 2018 alone (Burgess 2018). Some of the faked or manipulated content has distinctly political tones, such as false narratives trying to link Congress party officials to the February 2019 Pulwama suicide attack (Nielsen 2019). As Nielsen shows, however, India’s disinformation problem extends beyond social media, with large numbers of respondents saying they distrust some news media and are concerned about shoddy reporting, partisan content, and spin. Complicating the situation are accusations that the governing Bharatiya Janata Party and Prime Minister Narendra Modi are the main beneficiaries of partisan and sectarian disinformation circulating on social media and through Modi’s own “NaMo” app. Similar accusations have been made about the opposition Congress party (Bansal 2019; Bisen 2019; Madrigal 2018; Patil 2019; Ponniah 2019; Poonam and Bansal 2019).

Social media have played a somewhat clearer role in ethnic violence in Myanmar. Following telecom deregulation in 2013, Facebook quickly became the dominant communications method in the country (Stecklow 2018). Again, social media are certainly not the cause of the violence against the Muslim Rohingya minority—the underlying tensions go back decades in Myanmar’s post-colonial history—but they are being used to incite hatred, spread fear, and encourage violence. In a report calling for genocide charges against Myanmar’s military leaders, UN investigators criticized Facebook for its lethargic reaction in addressing hate speech and incitement to violence on its platform (Nebehay 2018). In Myanmar, some of the producers of disinformation are easily identified, and have included military leaders and the Buddhist monk and nationalist leader Wirathu. Much of the incendiary content on Facebook is hate speech or what Lee (2019) calls “extreme speech,” but observers have documented other rhetorical strategies, such as false rumors of rape by Muslim men and doctored photographs (Barron 2018; Hodal 2013; Lee 2019; Slodkowski 2018; Stecklow 2018; van Klinken and Aung 2017). In late 2018, Facebook removed hundreds of pages, saying that they were engaged in “coordinated inauthentic behavior” and that “seemingly independent news, entertainment, beauty and lifestyle pages were linked to the Myanmar military” (Facebook 2018).

Because of the pace of academic publishing and the challenges of social science research under Myanmar’s regime, most of what we know so far has come from sources like investigative journalism and human rights agency reports. However, a few academic publications have appeared since the uptick in violence in 2017, which began with insurgent attacks on police stations (for research from earlier in the 2010s, see the 2017 special issue of Journal of Contemporary Asia). Lee suggests that state media, such as the newspaper Global New Light of Myanmar, contribute to an information environment that allows and encourages hate speech, signaling the kinds of online messages that the regime finds acceptable. Lee further argues that state media messaging obscures atrocities, ignores the history of official discrimination against Rohingya, and “encourages readers to believe the country is under siege from Muslims” (2019, 3214). Finally, Kyaw (2019) argues in a commentary that the government has focused on “fake news” as a problem because it deflects blame away from the ongoing anti-Rohingya hate speech problem, an observation that, if true, suggests that the problem of disinformation in Myanmar’s mediasphere has itself become a meta-disinformation narrative.

The research from Venezuela, India, and Myanmar demonstrates that while social media can form a node of disinformation production, that node operates alongside problematic mass media (or mass media that are perceived to be problematic). Those media, at least in some cases, are aligned with governments, parties, and entrenched power structures. Untangling these interactions between social or “new” media and traditional mass media is crucial to understanding how disinformation campaigns function. These alignments provide further evidence that disinformation is not solely an online problem, or new media problem, but one connected to longstanding disparities in media access between political elites and those lower on the socioeconomic ladder.

Conclusion

Disinformation campaigns vary with their specific social, cultural, and geographic contexts, but certain trends are emerging. While disinformation seems to have strong correlations with nationalist politics and authoritarian states, it can be produced in established democracies by groups disaffected by institutional governance or fearful of losing their traditional advantages. While disinformation can be generated by or aimed at leftists, at least for now it seems to be disproportionately prevalent on the political right in Europe and the United States—though as Ruths (2019) argues, it may be that left-wing audiences assimilate and spread misinformation in ways that we do not observe as consistently. Disinformation can mobilize or demobilize a target population, but it can also be effectively employed to merely confuse or distract (Karlsen 2016; King, Pan, and Roberts 2017; Tucker et al. 2018).

In light of these and other developments, scholars such as Robert W. McChesney (2013), Safiya Noble (2018), and Siva Vaidhyanathan (2012; 2018) are asking that societies reconsider their relationships to new technologies and the corporate institutions that control them. Mejias and Vokuev (2017), building on the work of McChesney and others, argue that the internet is becoming an inherently conservative technology, one that reduces individual agency, demobilizes and surveils civil society, and bolsters entrenched power structures. While social media and other internet technologies have facilitated change and revolution in countries with governments unprepared to respond, both autocratic and democratic states can harness their surveillance power and disrupt democratic and opposition movements through disinformation—and all as the tech platforms continue to profit.

Avenues for future study

In much of the research discussed above, the actual producer of disinformation is missing. We tend to know much more about the institutions that sponsor disinformation producers, and the platforms and networks on which they spread their narratives. In most cases, the identities and motivations of the real people sitting at computers or tapping on phones are unknown. Some studies (e.g., Phillips 2011, 2012; Fedor and Fredheim 2017) have shed light on these individuals with more ethnographic approaches. Either social scientists have largely left this area of research to investigative journalists, or their research has not yet reached publication, with exceptions like recent work by Ong and Cabañes (2019).

One potentially rich vein of research examines the involvement (or noninvolvement) of elites in shaping media discourse, extending a chain of inquiry that traces back to Powdermaker (1950) and Herman and Chomsky (1988). This research has contributed to the idea that people in positions of power in industry, media, and politics have disproportionate effects on what ideas are considered important, and the ways they are discussed in societies. For example, Brulle, Carmichael, and Jenkins (2012) suggest that disagreement among American political elites on climate change has contributed to polarization of the issue and to decreasing levels of public concern following an increase in 2006 and 2007. On the other hand, Oliver and Wood (2014) present us with a paradox about elite opinions: Many scholars argue that elites are the drivers of public opinion. But if that is the case, Oliver and Wood ask, why do so many Americans believe in conspiracy theories that run counter to mainstream explanations and demonstrate suspicion of political elites and their motives? Research by Starbird (2017) also demonstrates the complexity of conspiracy narratives, which she finds are often tied to a strong political agenda and are frequently antiglobalist and critical of Western governments, but do not align with the mainstream left-right divide in US politics. Consideration of these questions becomes more complicated in different national contexts. For example, Bennett and Livingston (2018) show that right-wing information networks operate differently in different countries and have varying relationships to their political structures.

Other related avenues of study might involve the relationships between mis- and disinformation, populist politics, and epistemic authority. For example, Ylä-Anttila (2018) argues that rather than rely on “common sense” or experiential knowledge, right-wing populists in Finland employ “counterknowledge” supported by their own alternative inquiries and authorities to counter mainstream policy.

Our grateful acknowledgement to Sarah Oates, Jacob N. Shapiro, and Kris-Stella Trump for their feedback during the writing process for this research review.

[1] Other nations, including the United States, aim disinformation at their rivals, interfere in foreign elections, and mislead their citizens through the media. However, experts have argued that Soviet and neo-Soviet concepts of the mass media are very different from those elsewhere, positioning the mass media as tools to maintain and project the power of the state at home and abroad (Oates 2007; Schramm 1984 [1956]).

[2] The authors cite a definition of bullshit developed by Harry Frankfurt, “which covers communicating with no regard to the truth or falsehood of the statements made” (Benkler, Faris, and Roberts 2018, 32).

[3] This effort was funded in part by Reid Hoffman, who is also a funder of MediaWell. In an apology, Hoffman said he had no knowledge of how his money had been used (Romm, Timberg, and Davis 2018).

Works Cited

Alba, Davey. 2020. “Riled Up: Misinformation Stokes Calls for Violence on Election Day.” New York Times, October 13, 2020. https://www.nytimes.com/2020/10/13/technology/viral-misinformation-violence-election.html.

Applebaum, Anne, Peter Pomerantsev, Melanie Smith, and Chloe Colliver. 2017. “‘Make Germany Great Again’ – Kremlin, Alt-Right and International Influences in the 2017 German Elections.” Institute for Strategic Dialogue.

Arun, Chinmayi. 2019. “On WhatsApp, Rumours, and Lynchings.” Economic and Political Weekly 54 (6): 7–8.

Bandeira, Luiza. 2019. “Civilian Militias in Venezuela Coordinate on Twitter.” Medium (blog). May 1, 2019. https://medium.com/dfrlab/civilian-militias-in-venezuela-coordinate-on-twitter-aadcd86d6186.

Bansal, Samarth. 2019. “Narendra Modi App Has a Fake News Problem.” DisFact (blog). January 27, 2019. https://medium.com/disfact/narendra-modi-app-has-a-fake-news-problem-d60b514bb8f1.

Barrett, Paul M. 2019a. “Tackling Domestic Disinformation: What the Social Media Companies Need to Do.” NYU Stern Center for Business and Human Rights. http://www.nyu.edu/content/nyu/en/about/news-publications/news/2019/april/tackling-domestic-disinformation--what-the-social-media-companie.

———. 2019b. Disinformation and the 2020 Election: How the Social Media Industry Should Prepare. NYU Stern Center For Business and Human Rights Report. New York, NY: NYU Stern. https://issuu.com/nyusterncenterforbusinessandhumanri/docs/nyu_election_2020_report/1.

Barron, Laignee. 2018. “‘Burmese Bin Laden’ Stopped From Spreading Hate on Facebook.” Time, February 28, 2018. https://time.com/5178790/facebook-removes-wirathu/.

Bastos, Marco T., and Dan Mercea. 2019. “The Brexit Botnet and User-Generated Hyperpartisan News.” Social Science Computer Review 37 (1): 38–54. https://doi.org/10.1177/0894439317734157.

Bay, Sebastian, and Rolf Fredheim. 2019. How Social Media Companies Are Failing to Combat Inauthentic Behaviour Online. Riga, Latvia: NATO STRATCOM Centre of Excellence. https://www.stratcomcoe.org/how-social-media-companies-are-failing-combat-inauthentic-behaviour-online.

Bengali, Shashank. 2019. “How WhatsApp Is Battling Misinformation in India, Where ‘Fake News Is Part of Our Culture.’” Los Angeles Times, February 4, 2019, sec. World & Nation. https://www.latimes.com/world/la-fg-india-whatsapp-2019-story.html.

Bengani, Priyanjana. 2019. “Hundreds of ‘Pink Slime’ Local News Outlets Are Distributing Algorithmic Stories and Conservative Talking Points.” Columbia Journalism Review, December 18, 2019. https://www.cjr.org/tow_center_reports/hundreds-of-pink-slime-local-news-outlets-are-distributing-algorithmic-stories-conservative-talking-points.php/.

———. 2020. “As Election Looms, a Network of Mysterious ‘Pink Slime’ Local News Outlets Nearly Triples in Size.” Columbia Journalism Review, August 4, 2020. https://www.cjr.org/analysis/as-election-looms-a-network-of-mysterious-pink-slime-local-news-outlets-nearly-triples-in-size.php.

Benkler, Yochai. 2019. “Cautionary Notes on Disinformation and the Origins of Distrust.” MediaWell, Social Science Research Council. October 17, 2019. https://mediawell.ssrc.org/expert-reflections/cautionary-notes-on-disinformation-benkler/.

Benkler, Yochai, Robert Faris, and Hal Roberts. 2018. Network Propaganda: Manipulation, Disinformation, and Radicalization in American Politics. Oxford University Press.

Benkler, Yochai, Casey Tilton, Bruce Etling, Hal Roberts, Justin Clark, Robert Faris, Jonas Kaiser, and Carolyn Schmitt. 2020. “Mail-in Voter Fraud: Anatomy of a Disinformation Campaign.” Berkman Klein Center for Internet & Society. http://wilkins.law.harvard.edu/publications/Benkler-etal-Mail-in-Voter-Fraud-Anatomy-of-a-Disinformation-Campaign.pdf.

Bennett, W Lance, and Steven Livingston. 2018. “The Disinformation Order: Disruptive Communication and the Decline of Democratic Institutions.” European Journal of Communication 33 (2): 122–39. https://doi.org/10.1177/0267323118760317.

Bisen, Arjun. 2019. “Disinformation Is Drowning Democracy.” Foreign Policy (blog). April 24, 2019. https://foreignpolicy.com/2019/04/24/disinformation-is-drowning-democracy/.

Bradshaw, Samantha, and Philip N. Howard. 2019. The Global Disinformation Order: 2019 Global Inventory of Organised Social Media Manipulation. The Computational Propaganda Project. https://comprop.oii.ox.ac.uk/research/cybertroops2019/.

Brown, Seth. 2017. “Alex Jones’s Infowars Media Empire Is Built to Sell Snake-Oil Diet Supplements.” Intelligencer, May 4, 2017. http://nymag.com/intelligencer/2017/05/how-does-alex-jones-make-money.html.

Brulle, Robert J., Jason Carmichael, and J. Craig Jenkins. 2012. “Shifting Public Opinion on Climate Change: An Empirical Assessment of Factors Influencing Concern over Climate Change in the U.S., 2002–2010.” Climatic Change 114 (2): 169–88. https://doi.org/10.1007/s10584-012-0403-y.

Buckels, Erin E., Paul D. Trapnell, and Delroy L. Paulhus. 2014. “Trolls Just Want to Have Fun.” Personality and Individual Differences, The Dark Triad of Personality, 67 (September): 97–102. https://doi.org/10.1016/j.paid.2014.01.016.

Burgess, Matt. 2018. “To Fight Fake News on WhatsApp, India Is Turning off the Internet.” Wired UK, October 18, 2018. https://www.wired.co.uk/article/whatsapp-web-internet-shutdown-india-turn-off.

Chen, Cheng, Kui Wu, Venkatesh Srinivasan, and Xudong Zhang. 2011. “Battling the Internet Water Army: Detection of Hidden Paid Posters.” ArXiv, November. http://arxiv.org/abs/1111.4297.

CNetS. 2020. “Evidence of a Coordinated Network Amplifying Inauthentic Narratives in the 2020 US Election.” CNetS (blog). September 25, 2020. https://cnets.indiana.edu/blog/2020/09/25/evidence-of-a-coordinated-network-amplifying-inauthentic-narratives-in-the-2020-us-election/.

Collins, Ben, Brandy Zadrozny, and Emmanuelle Saliba. 2020. “White Nationalist Group Posing as Antifa Called for Violence on Twitter.” NBC News. June 1, 2020. https://www.nbcnews.com/tech/security/twitter-takes-down-washington-protest-disinformation-bot-behavior-n1221456.

Collins-Dexter, Brandi. 2020. “Butterfly Attack: Operation Blaxit.” Media Manipulation Casebook. October 16, 2020. https://mediamanipulation.org/case-studies/butterfly-attack-operation-blaxit.

Commission on Freedom of the Press (Hutchins Commission). 1947. A Free and Responsible Press. University of Chicago Press. http://archive.org/details/freeandresponsib029216mbp.

Crosset, Valentine, Samuel Tanner, and Aurélie Campana. 2018. “Researching Far Right Groups on Twitter: Methodological Challenges 2.0.” New Media & Society, December. https://doi.org/10.1177/1461444818817306.

Cull, Nicholas J. 2003. “Spanish-American War.” In Propaganda and Mass Persuasion: A Historical Encyclopedia, 1500 to the Present. Credo Reference. https://search.credoreference.com/content/entry/abcprop/spanish_american_war/0.

Danish Defense Intelligence Service. 2015. “Intelligence Risk Assessment 2015.” Danish Defense Intelligence Service.

Dewey, Caitlin. 2016. “Facebook Fake-News Writer: ‘I Think Donald Trump Is in the White House Because of Me.’” Washington Post, November 17, 2016, sec. Internet Culture. https://www.washingtonpost.com/news/the-intersect/wp/2016/11/17/facebook-fake-news-writer-i-think-donald-trump-is-in-the-white-house-because-of-me/.

DiResta, Renee, Kris Shaffer, Becky Ruppel, and David Sullivan. 2018. “The Tactics & Tropes of the Internet Research Agency.” New Knowledge, December 17, 2018. https://www.newknowledge.com/articles/the-disinformation-report/.

Dixit, Pranav, and Ryan Mac. 2018. “Vicious Rumors Spread Like Wildfire on WhatsApp — and Destroyed a Village.” BuzzFeed News. September 9, 2018. https://www.buzzfeednews.com/article/pranavdixit/whatsapp-destroyed-village-lynchings-rainpada-india.

DNI. 2020a. “Statement by NCSC Director William Evanina: 100 Days Until Election 2020.” Office of the Director of National Intelligence. https://www.dni.gov/index.php/newsroom/press-releases/item/2135-statement-by-ncsc-director-william-evanina-100-days-until-election-2020.

———. 2020b. “Statement by NCSC Director William Evanina: Election Threat Update for the American Public.” Office of the Director of National Intelligence. https://www.dni.gov/index.php/newsroom/press-releases/item/2139-statement-by-ncsc-director-william-evanina-election-threat-update-for-the-american-public.

Donovan, Joan, and Claire Wardle. 2020. “Misinformation Is Everybody’s Problem Now.” Items – Social Science Research Council (blog). https://items.ssrc.org/covid-19-and-the-social-sciences/mediated-crisis/misinformation-is-everybodys-problem-now/.

Donsbach, Wolfgang, and Bettina Klett. 1993. “Subjective Objectivity. How Journalists in Four Countries Define a Key Term of Their Profession.” International Communication Gazette. https://journals.sagepub.com/doi/10.1177/001654929305100104.

Evanega, Sarah, Mark Lynas, Jordan Adams, and Karinne Smolenyak. 2020. “Quantifying Sources and Themes in the COVID-19 ‘Infodemic.’” Cornell Alliance for Science.

Facebook. 2018. “Removing Myanmar Military Officials from Facebook.” About Facebook (blog). August 28, 2018. https://about.fb.com/news/2018/08/removing-myanmar-officials/.

Faris, Robert M., Hal Roberts, Bruce Etling, Nikki Bourassa, Ethan Zuckerman, and Yochai Benkler. 2017. “Partisanship, Propaganda, and Disinformation: Online Media and the 2016 U.S. Presidential Election.” Berkman Klein Center for Internet & Society Research Paper. Berkman Klein Center for Internet & Society, Harvard University. https://dash.harvard.edu/handle/1/33759251.

Farrell, Tracie, Miriam Fernandez, Jakub Novotny, and Harith Alani. 2019. “Exploring Misogyny across the Manosphere in Reddit.” In WebSci ’19 Proceedings of the 10th ACM Conference on Web Science, 87–96. Boston. http://oro.open.ac.uk/61128/.

FBI. 2020. “Foreign Actors Likely to Use Online Journals to Spread Disinformation Regarding 2020 Elections.” https://www.ic3.gov/Media/Y2020/PSA201001.

Fedor, Julie, and Rolf Fredheim. 2017. “‘We Need More Clips about Putin, and Lots of Them:’ Russia’s State-Commissioned Online Visual Culture.” Nationalities Papers 45 (2): 161–81. https://doi.org/10.1080/00905992.2016.1266608.

Filipec, Ondrej. 2019. “Towards a Disinformation Resilient Society?: The Experience of the Czech Republic.” Cosmopolitan Civil Societies: An Interdisciplinary Journal 11 (March). https://search.informit.org/documentSummary;dn=327461541708193;res=IELHSS.

Forelle, Michelle, Phil Howard, Andrés Monroy-Hernández, and Saiph Savage. 2015. “Political Bots and the Manipulation of Public Opinion in Venezuela.” SSRN, July. https://ssrn.com/abstract=2635800.

Franke, Ulrik. 2015. “War by Non-Military Means: Understanding Russian Information Warfare.” Totalförsvarets Forskningsinstitut (FOI), March. http://dataspace.princeton.edu/jspui/handle/88435/dsp019c67wq22q.

Garcia-Camargo, Isabella, Alex Stamos, Elena Cryst, Joe Bak-Coleman, Kate Starbird, and Joey Schafer. 2020. “Project Veritas #BallotHarvesting Amplification.” Election Integrity Partnership. https://www.eipartnership.net/rapid-response/project-veritas-ballotharvesting.

Gitlin, Jonathan M. 2020. “The Preprint Problem: Unvetted Science Is Fueling COVID-19 Misinformation.” Ars Technica. May 6, 2020. https://arstechnica.com/science/2020/05/a-lot-of-covid-19-papers-havent-been-peer-reviewed-reader-beware/.

Glaser, April. 2020. “With Days Left, Black Voters Face Orchestrated Efforts to Discourage Voting.” NBC News. October 16, 2020. https://www.nbcnews.com/tech/social-media/days-left-black-voters-face-orchestrated-efforts-discourage-voting-n1243780.

Goldman, Adam, Julian E. Barnes, Maggie Haberman, and Nicholas Fandos. 2020. “Lawmakers Are Warned That Russia Is Meddling to Re-Elect Trump.” New York Times, February 20, 2020. https://www.nytimes.com/2020/02/20/us/politics/russian-interference-trump-democrats.html.

Goodwin, Anastasia, Katie Joseff, and Samuel C. Woolley. 2020. “Social Media Influencers and the 2020 U.S. Election: Paying ‘Regular People’ for Digital Campaign Communication.” Accessed October 14, 2020. https://mediaengagement.org/research/social-media-influencers-and-the-2020-election/.

Gray, Kishonna L. 2012. “Intersecting Oppressions and Online Communities.” Information, Communication & Society 15 (3): 411–28. https://doi.org/10.1080/1369118X.2011.642401.

Guess, Andrew, Jonathan Nagler, and Joshua Tucker. 2019. “Less than You Think: Prevalence and Predictors of Fake News Dissemination on Facebook.” Science Advances 5 (1). https://doi.org/10.1126/sciadv.aau4586.

Heier, Tormod. 2016. “The Logic of Asymmetry: Russia’s Approach Towards NATO.” In Ukraine and Beyond: Russia’s Strategic Security Challenge to Europe, edited by Janne Haaland Matlary and Tormod Heier. Springer.

Herman, Edward S., and Noam Chomsky. 1988. Manufacturing Consent: The Political Economy of the Mass Media. Pantheon Books.

Higgins, Andrew, Mike McIntire, and Gabriel J.X. Dance. 2017. “Inside a Fake News Sausage Factory: ‘This Is All About Income.’” New York Times, December 22, 2017, sec. World. https://www.nytimes.com/2016/11/25/world/europe/fake-news-donald-trump-hillary-clinton-georgia.html.

Hodal, Kate. 2013. “Buddhist Monk Uses Racism and Rumours to Spread Hatred in Burma.” The Guardian, April 18, 2013, sec. World News. https://www.theguardian.com/world/2013/apr/18/buddhist-monk-spreads-hatred-burma.

Hwang, Tim. 2017. “Digital Disinformation: A Primer.” Atlantic Council. https://www.atlanticcouncil.org/publications/articles/digital-disinformation-a-primer.

Ingraham, Christopher. 2020. “New Research Explores How Conservative Media Misinformation May Have Intensified the Severity of the Pandemic.” Washington Post. Accessed October 23, 2020. https://www.washingtonpost.com/business/2020/06/25/fox-news-hannity-coronavirus-misinformation/.

Iyengar, Shanto, Gaurav Sood, and Yphtach Lelkes. 2012. “Affect, Not Ideology: A Social Identity Perspective on Polarization.” Public Opinion Quarterly 76 (3): 405–31. https://doi.org/10.1093/poq/nfs038.

Iyengar, Shanto, and Sean J. Westwood. 2015. “Fear and Loathing across Party Lines: New Evidence on Group Polarization.” American Journal of Political Science 59 (3): 690–707. https://doi.org/10.1111/ajps.12152.

Jahng, Mi Rosie, and Jeremy Littau. 2015. “Interacting Is Believing: Interactivity, Social Cue, and Perceptions of Journalistic Credibility on Twitter.” Journalism & Mass Communication Quarterly 93 (1): 38–58. https://doi.org/10.1177/1077699015606680.

Jamieson, Kathleen Hall, and Dolores Albarracín. 2020. “The Relation between Media Consumption and Misinformation at the Outset of the SARS-CoV-2 Pandemic in the US.” Harvard Kennedy School Misinformation Review, no. 2 (April). https://doi.org/10.37016/mr-2020-012.

Joseff, Katie, and Samuel Woolley. 2019. “The Human Consequences of Computational Propaganda: Eight Case Studies from the 2018 US Midterm Elections.” Institute for the Future.

Karlsen, Geir Hågen. 2016. “Tools of Russian Influence: Information and Propaganda.” In Ukraine and Beyond: Russia’s Strategic Security Challenge to Europe, edited by Janne Haaland Matlary and Tormod Heier, 181–208. Cham: Springer International Publishing. https://doi.org/10.1007/978-3-319-32530-9_9.

Karpf, David. 2019. On Digital Disinformation and Democratic Myths. Social Science Research Council, MediaWell. https://mediawell.ssrc.org/expert-reflections/on-digital-disinformation-and-democratic-myths/. https://doi.org/10.35650/MD.2012.d.2019.

Kennedy, Ian, Andrew Beers, Kolina Koltai, Morgan Wack, Joey Schafer, Paul Lockaby, Michael Caulfield, et al. 2020. “Emerging Narratives Around ‘Mail Dumping’ and Election Integrity.” Election Integrity Partnership. https://www.eipartnership.net/rapid-response/mail-dumping.

Kim, Young Mie. 2020. “New Evidence Shows How Russia’s Election Interference Has Gotten More Brazen.” Brennan Center for Justice at NYU. https://www.brennancenter.org/our-work/analysis-opinion/new-evidence-shows-how-russias-election-interference-has-gotten-more.

King, Gary, Jennifer Pan, and Margaret E. Roberts. 2017. “How the Chinese Government Fabricates Social Media Posts for Strategic Distraction, Not Engaged Argument.” American Political Science Review 111 (3): 484–501.

Kirby, Jen. 2020. “Are China and Iran Meddling in US Elections? It’s Complicated.” Vox. September 15, 2020. https://www.vox.com/21418513/china-iran-us-election-meddling-russia.

Kornbluh, Karen, Adrienne Goldstein, and Eli Weiner. 2020. “New Study by Digital New Deal Finds Engagement with Deceptive Outlets Higher on Facebook Today Than Run-up to 2016 Election.” German Marshall Fund of the United States. https://www.gmfus.org/blog/2020/10/12/new-study-digital-new-deal-finds-engagement-deceptive-outlets-higher-facebook-today.

Kyaw, Nyi Nyi. 2019. “Facebooking in Myanmar: From Hate Speech to Fake News to Partisan Political Communication.” ISEAS-Yusof Ishak Institute. https://think-asia.org/handle/11540/10254.

Lee, Ronan. 2019. “Extreme Speech in Myanmar: The Role of State Media in the Rohingya Forced Migration Crisis.” International Journal of Communication 13 (July): 3203–3224.

Lim, Gabrielle. 2020. “The Risks of Exaggerating Foreign Influence Operations and Disinformation.” Center for International Governance Innovation. https://www.cigionline.org/articles/risks-exaggerating-foreign-influence-operations-and-disinformation.

Lima, Cristiano. 2019. “Facebook, Twitter Take down Disinformation Campaigns Linked to Iran, Russia, Venezuela.” POLITICO, April 6, 2019. https://www.politico.com/story/2019/01/31/facebook-twitter-disinformation-iran-2608393.

Madrigal, Alexis C. 2018. “India’s Lynching Epidemic and the Problem With Blaming Tech.” The Atlantic, September 25, 2018. https://www.theatlantic.com/technology/archive/2018/09/whatsapp/571276/.

Martin, Diego A., and Jacob N. Shapiro. 2019. “Trends in Online Foreign Influence Efforts.” ESOC Publications. Empirical Studies of Conflict, Princeton University. https://esoc.princeton.edu/files/trends-online-foreign-influence-efforts.

Martin, Diego A., Jacob N. Shapiro, and Julia G. Ilhardt. 2020. “Trends in Online Influence Efforts Version 2.” ESOC Publications. Empirical Studies of Conflict, Princeton University. https://esoc.princeton.edu/publications/trends-online-influence-efforts.

Marwick, Alice E. 2018. “Why Do People Share Fake News? A Sociotechnical Model of Media Effects.” Georgetown Law Technology Review 474.

Marwick, Alice, and Rebecca Lewis. 2017. Media Manipulation and Disinformation Online. New York: Data & Society Research Institute.

Matlary, Janne Haaland, and Tormod Heier. 2016. Ukraine and Beyond: Russia’s Strategic Security Challenge to Europe. Springer.

Mazzarella, William. 2013. Censorium: Cinema and the Open Edge of Mass Publicity. Duke University Press.

McBride, Megan K., and Jessica Stern. 2020. “The Flood of Online Misinformation Around the George Floyd Protests.” Lawfare (blog). June 22, 2020. https://www.lawfareblog.com/flood-online-misinformation-around-george-floyd-protests.

McChesney, Robert W. 1995. Telecommunications, Mass Media, and Democracy: The Battle for the Control of U.S. Broadcasting, 1928-1935. Oxford University Press.

———. 2013. Digital Disconnect: How Capitalism Is Turning the Internet Against Democracy. New York: The New Press.

McCraw, Shannon K. n.d. “Equal Time Rule.” The First Amendment Encyclopedia, Middle Tennessee State University. Accessed December 3, 2019. https://www.mtsu.edu/first-amendment/article/949/equal-time-rule.

McGranahan, Carole. 2017. “An Anthropology of Lying: Trump and the Political Sociality of Moral Outrage.” American Ethnologist 44 (2): 243–48. https://doi.org/10.1111/amet.12475.

McLuhan, Marshall, and Lewis H. Lapham. 1996. Understanding Media: The Extensions of Man. Reprint edition. Cambridge, MA: The MIT Press.

Mejias, Ulises A, and Nikolai E Vokuev. 2017. “Disinformation and the Media: The Case of Russia and Ukraine.” Media, Culture & Society 39 (7): 1027–42. https://doi.org/10.1177/0163443716686672.

Narayanan, Vidya, Vlad Barash, John Kelly, Bence Kollanyi, Lisa-Maria Neudert, and Philip N. Howard. 2018. “Polarization, Partisanship and Junk News Consumption over Social Media in the US.” Oxford Internet Institute. http://arxiv.org/abs/1803.01845v1.

Nebehay, Stephanie. 2019. “U.N. Calls for Myanmar Generals to Be Tried for Genocide, Blames Facebook for Incitement.” Reuters, August 27, 2019. https://www.reuters.com/article/us-myanmar-rohingya-un/u-n-calls-for-myanmar-generals-to-be-tried-for-genocide-blames-facebook-for-incitement-idUSKCN1LC0KN.

Nielsen, Rasmus Kleis. 2019. “Disinformation Is Everywhere in India.” The Hindu, March 25, 2019. https://www.thehindu.com/opinion/op-ed/disinformation-is-everywhere-in-india/article26626745.ece.

Nimmo, Ben. 2015. “Anatomy of an Info-War: How Russia’s Propaganda Machine Works, and How to Counter It.” StopFake.org. May 9, 2015. https://www.stopfake.org/en/anatomy-of-an-info-war-how-russia-s-propaganda-machine-works-and-how-to-counter-it/.

Nithyanand, Rishab, Brian Schaffner, and Phillipa Gill. 2017. “Online Political Discourse in the Trump Era.” ArXiv:1711.05303 [Cs], November. http://arxiv.org/abs/1711.05303.

Noble, Safiya Umoja. 2018. Algorithms of Oppression: How Search Engines Reinforce Racism. NYU Press.

Nugent, Ciara. 2019. “Inside the Battle to Get News to Venezuelans.” Time, April 6, 2019. https://time.com/5571504/venezuela-internet-press-freedom/.

Oates, Sarah. 2007. “The Neo-Soviet Model of the Media.” Europe-Asia Studies 59 (8): 1279–97.

Oliver, J. Eric, and Thomas J. Wood. 2014. “Conspiracy Theories and the Paranoid Style(s) of Mass Opinion.” American Journal of Political Science 58 (4): 952–66. https://doi.org/10.1111/ajps.12084.

Ong, Jonathan Corpus, and Jason Vincent A. Cabañes. 2019. “Four Work Models of Political Trolling in the Philippines.” NATO STRATCOM Centre of Excellence. https://www.stratcomcoe.org/four-work-models-political-trolling-philippines.

Osei-Opare, Nana. 2020. “When It Comes to America’s Race Issues, Russia Is a Bogeyman.” Foreign Policy (blog). July 6, 2020. https://foreignpolicy.com/2020/07/06/when-it-comes-to-americas-race-issues-russia-is-a-bogeyman/.

OSoMe. 2020. “Tracking Public Opinion about Unsupported Narratives in the 2020 Presidential Election.” Indiana University-Bloomington: Observatory on Social Media (OSoMe). https://mediaschool.indiana.edu/research/reports/wave-1.html.

Patil, Samir. 2019. “India Has a Public Health Crisis. It’s Called Fake News.” The New York Times, May 3, 2019, sec. Opinion. https://www.nytimes.com/2019/04/29/opinion/india-elections-disinformation.html.

Phillips, Whitney. 2011. “LOLing at Tragedy: Facebook Trolls, Memorial Pages and Resistance to Grief Online.” First Monday 16 (12). https://doi.org/10.5210/fm.v16i12.3168.

———. 2012. “This Is Why We Can’t Have Nice Things: The Origins, Evolution and Cultural Embeddedness of Online Trolling.” ProQuest LLC. https://search.proquest.com/openview/b2a7a4c485c5d4afef6965aca6d2362c/1?pq-origsite=gscholar&cbl=18750&diss=y.

———. 2018. The Oxygen of Amplification. New York: Data & Society Research Institute. https://datasociety.net/output/oxygen-of-amplification/.

Pomerantsev, Peter, and Michael Weiss. 2014. “How the Kremlin Weaponizes Information, Culture and Money.” The Interpreter, November 22, 2014.

Ponniah, Kevin. 2019. “WhatsApp: The ‘Black Hole’ of Fake News in India’s Election.” BBC News, April 6, 2019. https://www.bbc.com/news/world-asia-india-47797151.

Poonam, Snigdha, and Samarth Bansal. 2019. “Misinformation Is Endangering India’s Election.” The Atlantic, April 1, 2019. https://www.theatlantic.com/international/archive/2019/04/india-misinformation-election-fake-news/586123/.

Powdermaker, Hortense. 1950. Hollywood the Dream Factory. An Anthropologist Looks at the Movie-Makers. 1st edition. London: Secker & Warburg.

Rogowski, Jon C., and Joseph L. Sutherland. 2016. “How Ideology Fuels Affective Polarization.” Political Behavior 38 (2): 485–508. https://doi.org/10.1007/s11109-015-9323-7.

Romm, Tony, Craig Timberg, and Aaron C. Davis. 2018. “Internet Billionaire Reid Hoffman Apologizes for Funding a Group Tied to Disinformation in Alabama Race.” Washington Post, December 26, 2018. https://www.washingtonpost.com/technology/2018/12/26/internet-billionaire-reid-hoffman-apologizes-funding-group-behind-disinformation-alabama-race/.

Russell, Adrienne. 2010. “Salon.Com and New-Media Professional Journalism Culture.” In The Anthropology of News & Journalism: Global Perspectives, edited by S. Elizabeth Bird. Indiana University Press.

———. 2011. Networked: A Contemporary History of News in Transition. Cambridge, UK: Polity.

Ruths, Derek. 2019. “The Misinformation Machine.” Science 363 (6425): 348–348. https://doi.org/10.1126/science.aaw1315.

Saad, Lydia. 2016. “Trump and Clinton Finish with Historically Poor Images.” Gallup. November 8, 2016. https://news.gallup.com/poll/197231/trump-clinton-finish-historically-poor-images.aspx.

Sanger, David E., and Julian E. Barnes. 2020. “U.S. Warns Russia, China and Iran Are Trying to Interfere in the Election. Democrats Say It’s Far Worse.” The New York Times, August 18, 2020, sec. U.S. https://www.nytimes.com/2020/07/24/us/politics/election-interference-russia-china-iran.html.

Schramm, Wilbur. 1984. “The Soviet Communist Theory of the Press.” In Four Theories of the Press: The Authoritarian, Libertarian, Social Responsibility, and Soviet Communist Concepts of What the Press Should Be and Do, edited by Fred S. Siebert, Theodore Peterson, and Wilbur Schramm, 105–46. University of Illinois Press. https://doi.org/10.5406/j.ctv1nhr0v.7.

Seitz, Amanda. 2020. “Facebook Groups Pivot to Attacks on Black Lives Matter.” AP News, July 5, 2020. https://apnews.com/article/ca8c15794c65b1ae8e176deb9be5d718.

Sides, John, Michael Tesler, and Lynn Vavreck. 2019. Identity Crisis: The 2016 Presidential Campaign and the Battle for the Meaning of America. Princeton University Press.

Silverman, Craig, and Lawrence Alexander. 2016. “How Teens in the Balkans Are Duping Trump Supporters with Fake News.” BuzzFeed News, November 3, 2016. https://www.buzzfeednews.com/article/craigsilverman/how-macedonia-became-a-global-hub-for-pro-trump-misinfo.

Simonov, Andrey, Szymon K Sacher, Jean-Pierre H Dubé, and Shirsho Biswas. 2020. “The Persuasive Effect of Fox News: Non-Compliance with Social Distancing During the Covid-19 Pandemic.” Working Paper 27237. Working Paper Series. National Bureau of Economic Research. https://doi.org/10.3386/w27237.

Slodkowski, Antoni. 2018. “Facebook Bans Myanmar Army Chief, Others in Unprecedented Move.” Reuters, August 27, 2018. https://www.reuters.com/article/us-myanmar-facebook/facebook-bans-myanmar-army-chief-others-in-unprecedented-move-idUSKCN1LC0R7.

Sobieraj, Sarah. 2019. “Disinformation, Democracy, and the Social Costs of Identity-Based Attacks Online.” MediaWell, Social Science Research Council. October 22, 2019. https://mediawell.ssrc.org/expert-reflections/disinformation-democracy-and-the-social-costs-of-identity-based-attacks-online/.

Soltani, Parisa, and Romeo Patini. 2020. “Retracted COVID-19 Articles: A Side-Effect of the Hot Race to Publication.” Scientometrics 125 (1): 819–22. https://doi.org/10.1007/s11192-020-03661-9.

Starbird, Kate. 2017. “Examining the Alternative Media Ecosystem Through the Production of Alternative Narratives of Mass Shooting Events on Twitter.” In Proceedings of the Eleventh International AAAI Conference on Web and Social Media (ICWSM 2017), 230–239.

Stecklow, Steve. 2018. “Why Facebook Is Losing the War on Hate Speech in Myanmar.” Reuters, August 15, 2018. https://www.reuters.com/investigates/special-report/myanmar-facebook-hate/.

Stein, Gabriel. 2020. Twitter, September 14, 2020. https://twitter.com/gabestein/status/1305583476452794369.

Timberg, Craig, Tony Romm, Aaron C. Davis, and Elizabeth Dwoskin. 2019. “Secret Campaign to Use Russian-Inspired Tactics in 2017 Ala. Election Stirs Anxiety for Democrats.” Washington Post, January 6, 2019, sec. Technology. https://www.washingtonpost.com/business/technology/secret-campaign-to-use-russian-inspired-tactics-in-2017-alabama-election-stirs-anxiety-for-democrats/2019/01/06/58803f26-0400-11e9-8186-4ec26a485713_story.html.

Toosi, Nahal. 2019. “Venezuela Becomes Trump’s Latest Proxy Battle with Russia.” POLITICO, May 1, 2019. https://politi.co/2PFeyEE.

Toro, Francisco. 2019. “The Bitter Truth about What’s Happening in Venezuela.” Washington Post, May 1, 2019, sec. Global Opinions. https://www.washingtonpost.com/opinions/2019/05/01/caught-between-washington-moscow-venezuelans-cant-see-end-this-crisis/.

Tucker, Joshua A., Andrew Guess, Pablo Barberá, Cristian Vaccari, Alexandra Siegel, Sergey Sanovich, Denis Stukal, and Brendan Nyhan. 2018. “Social Media, Political Polarization, and Political Disinformation: A Review of the Scientific Literature.” The William and Flora Hewlett Foundation. https://hewlett.org/library/social-media-political-polarization-political-disinformation-review-scientific-literature/.

Usher, Nikki, and Matt Carlson. 2018. “The Midlife Crisis of the Network Society.” Media and Communication 6 (4): 107. https://doi.org/10.17645/mac.v6i4.1751.

Vaidhyanathan, Siva. 2012. The Googlization of Everything: (And Why We Should Worry). University of California Press.

———. 2018. Antisocial Media: How Facebook Disconnects Us and Undermines Democracy. Oxford University Press.

Van Klinken, Gerry, and Su Mon Thazin Aung. 2017. “The Contentious Politics of Anti-Muslim Scapegoating in Myanmar.” Journal of Contemporary Asia 47 (March): 1–23. https://doi.org/10.1080/00472336.2017.1293133.

Vandewalker, Ian. 2020. “Digital Disinformation and Vote Suppression.” Brennan Center for Justice. https://www.brennancenter.org/our-work/research-reports/digital-disinformation-and-vote-suppression.

Waisbord, Silvio. 2013. Reinventing Professionalism: Journalism and News in Global Perspective. John Wiley & Sons.

Wardle, Claire, and Hossein Derakhshan. 2017. “Information Disorder: Toward an Interdisciplinary Framework for Research and Policy Making.” 162317GBR. Council of Europe. https://edoc.coe.int/en/media/7495-information-disorder-toward-an-interdisciplinary-framework-for-research-and-policy-making.html.

Warzel, Charlie. 2017. “We Sent Alex Jones’ Infowars Supplements to a Lab. Here’s What’s in Them.” Buzzfeed News, August 9, 2017. https://www.buzzfeednews.com/article/charliewarzel/we-sent-alex-jones-infowars-supplements-to-a-lab-heres.

Wessel, Michael, Ferdinand Thies, and Alexander Benlian. 2016. “The Emergence and Effects of Fake Social Information: Evidence from Crowdfunding.” Decision Support Systems 90 (October): 75–85. https://doi.org/10.1016/j.dss.2016.06.021.

Winneg, Kenneth M., Bruce W. Hardy, Jeffrey A. Gottfried, and Kathleen Hall Jamieson. 2014. “Deception in Third Party Advertising in the 2012 Presidential Campaign.” American Behavioral Scientist 58 (4): 524–35. https://doi.org/10.1177/0002764214524358.

Yashoda, Vishal. 2018. “Officials Blame WhatsApp for Spike in Mob Killings, but Indians Say Vicious Party Politics Are at Fault.” Global Voices (blog). August 2, 2018. https://globalvoices.org/2018/08/02/officials-blame-whatsapp-for-spike-in-mob-killings-but-indians-say-vicious-party-politics-are-at-fault/.

Ylä-Anttila, Tuukka. 2018. “Populist Knowledge: ‘Post-Truth’ Repertoires of Contesting Epistemic Authorities.” European Journal of Cultural and Political Sociology 5 (4): 356–88. https://doi.org/10.1080/23254823.2017.1414620.

Youn, Soo. 2020. “Black Women Are Being Targeted in Misinformation Campaigns, a Report Shows. Here’s What to Know.” The Lily. Accessed October 23, 2020. https://www.thelily.com/black-women-are-being-targeted-in-misinformation-campaigns-a-report-shows-heres-what-to-know/.
