Research Review

Defining “Disinformation”


Key Points:

  • The academic community has gravitated toward definitions that treat disinformation as intentional and misinformation as unintentional. Under such definitions, the same false narrative can be either dis- or misinformation, depending on the intent of the spreader.
  • Because intent is notoriously difficult to determine in social science, it may be best to speak of dis- and misinformation together.
  • “Fake News” is seen as a highly problematic, politicized term, and most disinformation scholars tend to avoid it.
  • Some prominent scholars use the term “propaganda” to refer to specific kinds of communication, but the word has a long and complicated intellectual history.

Defining “disinformation” and related concepts

One of the most urgent tasks facing scholars and the public alike is identifying shared definitions of disinformation and related topics across academic disciplines. As Jack (2017) and Wardle and Derakhshan (2017) note, disinformation scholars, journalists, policymakers, and other workers in the arena of political communication employ a wide array of subtly different but equally loaded definitions. As Karlsen (2016) notes, there is no overarching theoretical framework for studying the concept of influence, only a patchwork of methodologies from various social science disciplines. Wardle and Derakhshan further argue that this lack of consensus and definitional rigor in terminology led to an early stagnation in the field of disinformation studies. All of the MediaWell research reviews draw on reports, conference papers, and other material in addition to peer-reviewed academic publications, but this one perhaps more than most. For a brief explanation of these types of content, please see our FAQ.

Wardle and Derakhshan’s own definition of disinformation (and by extension, misinformation) has gained ground in the community of scholars, activists, and nonprofits attempting to understand and mitigate these conditions. In their interpretation, disinformation—“information that is false and deliberately created to harm a person, social group, organization or country”—is one of three types of content contributing to “information disorder.” The others are misinformation, which they define as “information that is false, but not created with the intention of causing harm,” and malinformation, factual information released to discredit or harm a person or institution, such as doxing, leaks, and certain kinds of hate speech (2017, 20).

In this articulation, the crucial distinction between disinformation and misinformation lies in intent. Producers of disinformation have made a conscious decision to propagate narratives that are false or misleading. Misinformation, by contrast, may begin as disinformation, but it is spread out of indifference to its truth value or out of unawareness that it is false; it may be passed along to entertain, educate, or provoke. As such, a single false narrative or piece of content may cross this blurry line from disinformation to misinformation and back again, depending on its various sources and their perceptions of its truth or falsity.

Wardle and Derakhshan are not alone in basing the distinction between disinformation and misinformation on intent. Other scholars have suggested similar definitions, such as Floridi (2011), Sþe (2018), and Hwang (2017), who includes “intentional actions” in his definition of disinformation. Bennett and Livingston (2018, 124) also describe disinformation as intentional, but limit its form to “intentional falsehoods spread as news stories or simulated documentary formats to advance political goals.” While Jack (2017, 3) defines disinformation as “deliberately false or misleading,” which would imply intent, she warns that an intention-based definition contributes to a power imbalance between producers of disinformation and their critics: journalists and social scientists, she writes, tend to refrain from accusations of intent because of professional codes and legal constraints, while creators of disinformation operate under no such limits.

Intent is also difficult for social scientists to assess, though they have some options: ethnographic observation, surveys, and experiments can capture people’s motivations for sharing disinformation, misinformation, or factual information, but intent remains hard to gauge. In that light, how useful are definitions that rely on distinctions that are difficult to measure? The question of intent is further complicated by discursive styles prevalent on the English-language internet—especially among trolls—that make intent simultaneously claimable and deniable. One such style is popularly termed “irony.” Marwick (2018), pointing to “the dominance of irony as an expressive and affective force in native internet content,” argues that it can make intent-based distinctions misleading. Witness the early 2019 example of a Chicago Cubs fan who was accused of flashing a white-supremacy hand symbol behind African American commentator Doug Glanville. The symbol began as a 4chan hoax to “trigger” liberals by assigning hateful meanings to innocuous signs. The gesture has since expanded to wider use on the political right—including by some avowed white supremacists. Though its meanings remain contested and ambiguous, the Cubs felt those meanings were clear enough to ban the fan from the ballpark (Anti-Defamation League 2018; Redford 2019).

Skyrms (2010) bases his definition of disinformation (though he uses the term “deception”) on the idea of a cost to the recipient and a benefit to the sender, avoiding the question of intent. In a critique of Skyrms’ definition, Fallis (2015) argues that it is too narrow, as some disinformation—like false compliments—may actually benefit the recipient. Fallis himself takes a subtly different approach to defining disinformation in an attempt to avoid both the question of intent and the idea of benefits. Fallis’s definition depends on the concept of function, which is a quality that a thing acquires, whether through evolution or the intent of a designer. His short definition reads: “disinformation is misleading information that has the function of misleading someone” (2015, 413; italics in original). This approach, Fallis argues, incorporates cases of disinformation that are intended to be misleading—such as lies—but also cases where the source benefits from misleading, such as conspiracy theories or shitposting.

Other proposed definitions for mis- and disinformation and related topics nod toward different characteristics. Gelfert (2018, 103), in a discussion of “fake news” as a subset of disinformation, argues that it is distinguished by “systemic features inherent in the design of the sources and mechanisms that give rise to them,” and that “fake news” must both mislead and do so as a consequence of its design. Gelfert suggests such systemic features might include manipulating consumers’ confirmation biases, repeating similar false narratives to render them more persuasive, and attempting to intensify consumers’ partisanship. Those systemic features work to inhibit critical reasoning and inquiry, Gelfert argues, and encourage users to further disseminate such content. While Gelfert’s articulation still relies on intent, a definition of disinformation that incorporates systemic features of sources and channels may hold promise.

To readers outside of academia, these debates and subtle distinctions over disinformation and misinformation may seem entirely academic. Perhaps, as with US Supreme Court Justice Potter Stewart’s famous observation about pornography, we should satisfy ourselves with knowing disinformation when we see it. However, the political climates in many parts of the world clearly demonstrate that different groups can interpret the same sets of facts very differently, and reveal how social circumstances, identity, and background shape different epistemologies—theories of what knowledge is—within and between social contexts. This is not to say that academics and researchers should content themselves with esoteric definitions that satisfy them but do not reflect the understandings of broader elements of society—quite the contrary.

Nielsen and Graves (2017) offer a useful reminder that academics, researchers, and policymakers think about some of these terms and concepts in very different ways than members of mass-media audiences. In their factsheet reporting findings from a survey and a series of focus groups on “fake news,” a concept we discuss further below, they report that news consumers “see the difference between fake news and news as one of degree rather than a clear distinction.” Their respondents treated “fake news” as a diverse class of information forms, and also recognized that the term had been weaponized. Many respondents included sensationalist journalism, hyperpartisan news, some kinds of advertising, and other types of content on the information spectrum. Nielsen and Graves argue that audiences’ considerations have been left out of the academic and policy debates, and that audiences express “a deeper discontent with many public sources of information, including news media and politicians as well as platform companies” (2017, 1, 7). This argument reflects a much broader societal issue, as well as a disconnect between academics, policymakers, and people in other walks of life. We hope that MediaWell and similar projects can begin to bridge this gap by translating academic knowledge for mass audiences, while at the same time serving as a portal for academics, policymakers, and technologists to share information that reflects how people understand political information in their daily lives.

It’s important to remember as well that disinformation is medium-agnostic, even though the ways it is created, shared, and received can be heavily influenced by medium. That is to say, disinformation is not limited to text. It exists in video and audio forms, such as YouTube videos and podcasts, and can certainly spread by word of mouth. Visual disinformation can be particularly problematic, as it can be both more persuasive and harder to debunk. However, to date, many studies and corrective efforts have focused on text (Wardle and Derakhshan 2017).

So where does all of this leave us in our definition? As Fallis (2015, 416) discusses in a response to Floridi (2012), disinformation may not be perfectly definable. “There may simply be prototypical instances of disinformation, with different things falling closer to or further from these prototypes,” he writes.

Nevertheless, we know that social actors are producing disinformation, and that it spreads through societies as both disinformation and misinformation (assuming we accept some sort of distinction between them). In that light, it may be most useful to continue to discuss dis- and misinformation together, or use a blanket term such as “problematic information,” which Jack (2017) employs to encompass both mis- and disinformation, themselves distinguished by intention. In keeping with the consensus emerging from Jack, Wardle and Derakhshan, and other authors, the MediaWell project provisionally defines “disinformation” as: a rhetorical strategy that produces and disseminates false or misleading information in a deliberate effort to confuse, influence, harm, mobilize, or demobilize a target audience. As its bedfellow, misinformation could then be defined as false or misleading information, spread unintentionally, that tends to confuse, influence, harm, mobilize, or demobilize an audience. We recognize the limitations of an intention-based distinction between dis- and misinformation, and suggest that they be considered together in a way that allows for their mutability. At the risk of some clunky sentences, we try to speak of “dis- and misinformation” unless the specifics of the communication are known.

Fake news and junk news

The term “fake news” is itself quite problematic, as the research mentioned above by Nielsen and Graves (2017) reflects. Wardle and Derakhshan, Jack, and many other observers eschew it, arguing that the phrase is both imprecise and politically loaded, and that autocratic leaders have begun to weaponize the term “fake news” to stifle dissent and delegitimize criticism of their regimes and allies (Jack 2017; Quandt et al. 2019; Sullivan 2017; Wardle and Derakhshan 2017; Zuckerman 2017). Moreover, as Tandoc, Lim, and Ling (2018) discuss, any critical assessment of the term “fake news” demands that we first determine what non-fake “news” is—and that is a surprisingly difficult, subjective, and highly contextual endeavor.[1] In a slightly different vein, Farkas and Schou (2018) suggest that the many uses and meanings of the term “fake news” indicate that it has become part of a larger struggle over the nature of politics and social realities in an environment of colliding worldviews.

The term “junk news” has seen some use among scholars hoping to describe the phenomenon without the additional political payload of “fake news.” Narayanan et al. (2018) describe junk news in their report as “various forms of extremist, sensationalist, conspiratorial, masked commentary, fake news and other forms,” while Venturini (2019) suggests that junk news should be defined by its virality rather than its falsity. The idea of false news, he argues, supposes that real news reproduces reality, when in fact all news is a mediated, framed journalistic product. Junk news is addictive, and “dangerous not because it is false, but because it saturates public debate.” So far, however, the term “junk news” has not seen widespread adoption in the social science community.

Propaganda

A concept closely related to disinformation, “propaganda” is employed to describe the contemporary information environment by prominent scholars such as Benkler, Faris, and Roberts (2018) and Howard (Howard and Kollanyi 2016; Woolley and Howard 2018). Benkler, Faris, and Roberts briefly define propaganda as “the intentional manipulation of beliefs,” and more elaborately as “communication designed to manipulate a target population by affecting its beliefs, attitudes, or preferences in order to obtain behavior compliant with political goals of the propagandist” (2018, 6, 29). Such a definition is broader than disinformation, as it would include factually true information framed in such a way as to obtain compliance. Both “propaganda” and “disinformation” could also include paltering, the use of factually true statements that nonetheless constitute an active deception (Gino 2016).

The term “propaganda” has a complicated history, especially for US audiences. The roots of the term date back hundreds of years, at least, and relate to the spread of religious doctrine. It did not acquire significantly negative meanings until the World Wars. Since then, rivals have frequently labeled their opponents’ messages as propaganda in line with these negative understandings (Auerbach and Castronovo 2013; Jack 2017).

Benkler, Faris, and Roberts’s use of the term propaganda stands in contrast to Jacques Ellul’s reconsideration of propaganda as a “sociological phenomenon” in technological society, one that encompasses both psychological warfare and the subtler forces that encourage conformity, the educations and socializations that “seek to adapt the individual to a society, to a living standard, to an activity” (1973, xii–xiii, xvii). In Ellul’s articulation, propaganda encompasses both deliberate political propaganda and what he calls “sociological” propaganda, the unconsciously reproduced messages in Hallmark cards, gym memberships, junk mail, Oscar ceremonies, cereal boxes, tax forms, this website 
 the list goes on. These are the messages that condition individuals not only to conform to societies but also to believe that their societies’ ways of life are good and superior. In short, in this understanding, these messages allow large societies to function at the scale where individuals do not know one another (1973, 61–70).

Benkler, Faris, and Roberts exclude this sociological propaganda from their articulation, instead returning “propaganda” to its more bounded meanings of overtly political and intentionally persuasive communication. They follow Jowett and O’Donnell (1992, 2011) in their intellectual history of propaganda, which treats it as a type of communication that differs from persuasion; where persuasion advances the aims of both communicants, propaganda only advances the aims of the propagandist. In other words, both communicants can benefit from persuasion, which is a reciprocal process based on interaction and deliberation, and which can promote mutual understanding. Propagandists, in this understanding, are self-motivated, and do not attempt to benefit their audience. Their true motives and identities may be concealed, and they do not attempt to develop mutual understanding.

Trolling

Social scientists have taken various approaches to studying the phenomenon of trolling, ranging from online ethnographic research to psychological profiles. Trolling has been compared to bullying, but also described as a purposeful disruption of a community or conversation for lulz. One of the complicating factors in establishing a definition is that trolling has changed as the internet has changed—it is now more formalized, with shared language and identity, and some within the subculture see it as performance art (Higgin 2013; Phillips 2011, 2012).

From a psychological approach, Buckels, Trapnell, and Paulhus (2014, 97) found that trolling “correlated positively with sadism, psychopathy, and Machiavellianism,” elements of the so-called Dark Tetrad of personality. They define trolling as “deceptive, destructive, or disruptive [behavior] 
 with no apparent instrumental purpose.” However, some behavior that has come to be called “trolling” does have an instrumental purpose—at least one that is apparent to the trolls themselves. Russian and Iranian online influence campaigns have been referred to as “troll farms” (Nimmo, Brookie, and Karan 2018). Their intentions and methods varied, but both targeted polarized online communities to further national goals (for more on these operations, see our “Election Interference” literature review). Whether these operations and their agents are close enough to more traditional forms of trolling for lulz, or whether they merit their own distinct terminology, is an open question.

Our grateful acknowledgement to Connie Moon Sehat, Kris-Stella Trump, and Lauren Weinzimmer for their feedback during the writing process for this research review.

[1] For three very different theoretical approaches to the question of what “news” is, we recommend Schudson (2011), The Sociology of News; Hardt (2004), Myths for the Masses; and Herman and Chomsky (1988), Manufacturing Consent.

Works Cited

Anti-Defamation League. 2018. “How the ‘OK’ Symbol Became a Popular Trolling Gesture.” Anti-Defamation League. Updated September 5, 2018. https://www.adl.org/blog/how-the-ok-symbol-became-a-popular-trolling-gesture.

Auerbach, Jonathan, and Russ Castronovo. 2013. “Introduction: Thirteen Propositions About Propaganda.” In The Oxford Handbook of Propaganda Studies, edited by Jonathan Auerbach and Russ Castronovo. Oxford University Press.

Benkler, Yochai, Robert Faris, and Hal Roberts. 2018. Network Propaganda: Manipulation, Disinformation, and Radicalization in American Politics. Oxford University Press.

Bennett, W. Lance, and Steven Livingston. 2018. “The Disinformation Order: Disruptive Communication and the Decline of Democratic Institutions.” European Journal of Communication 33 (2): 122–39. https://doi.org/10.1177/0267323118760317.

Buckels, Erin E., Paul D. Trapnell, and Delroy L. Paulhus. 2014. “Trolls Just Want to Have Fun.” Personality and Individual Differences, The Dark Triad of Personality, 67 (September): 97–102. https://doi.org/10.1016/j.paid.2014.01.016.

Ellul, Jacques. 1973. Propaganda: The Formation of Men’s Attitudes. Translated by Konrad Kellen and Jean Lerner. New York: Vintage.

Fallis, Don. 2015. “What Is Disinformation?” Library Trends 63 (3): 401–26. https://doi.org/10.1353/lib.2015.0014.

Farkas, Johan, and Jannick Schou. 2018. “Fake News as a Floating Signifier: Hegemony, Antagonism and the Politics of Falsehood.” Javnost – The Public 25 (3): 298–314. https://doi.org/10.1080/13183222.2018.1463047.

Floridi, Luciano. 2011. The Philosophy of Information. Oxford University Press.

———. 2012. “Steps Forward in the Philosophy of Information.” Etica E Politica 14 (1): 304–310.

Gelfert, Axel. 2018. “Fake News: A Definition.” Informal Logic 38 (1): 84–117. https://doi.org/10.22329/il.v38i1.5068.

Gino, Francesca. 2016. “There’s a Word for Using Truthful Facts to Deceive: Paltering.” Harvard Business Review, October 5, 2016. https://hbr.org/2016/10/theres-a-word-for-using-truthful-facts-to-deceive-paltering.

Higgin, Tanner. 2013. “FCJ-159 /b/Lack up: What Trolls Can Teach Us About Race.” The Fibreculture Journal (22). http://twentytwo.fibreculturejournal.org/fcj-159-black-up-what-trolls-can-teach-us-about-race/.

Howard, Philip N., and Bence Kollanyi. 2016. “Bots, #StrongerIn, and #Brexit: Computational Propaganda during the UK-EU Referendum.” SSRN, June. https://papers.ssrn.com/sol3/papers.cfm?abstract_id=2798311.

Hwang, Tim. 2017. “Digital Disinformation: A Primer.” Atlantic Council. https://www.atlanticcouncil.org/publications/articles/digital-disinformation-a-primer.

Jack, Caroline. 2017. “Lexicon of Lies: Terms for Problematic Information.” Data & Society.

Jowett, Garth, and Victoria O’Donnell. 1992. Propaganda and Persuasion. Sage Publications.

Jowett, Garth S., and Victoria O’Donnell. 2011. Propaganda & Persuasion. SAGE Publications.

Karlsen, Geir HĂ„gen. 2016. “Tools of Russian Influence: Information and Propaganda.” In Ukraine and Beyond: Russia’s Strategic Security Challenge to Europe, edited by Janne Haaland Matlary and Tormod Heier, 181–208. Cham: Springer International Publishing. https://doi.org/10.1007/978-3-319-32530-9_9.

Marwick, Alice E. 2018. “Why Do People Share Fake News? A Sociotechnical Model of Media Effects.” Georgetown Law Technology Review 2 (2): 474.

Narayanan, Vidya, Vlad Barash, John Kelly, Bence Kollanyi, Lisa-Maria Neudert, and Philip N. Howard. 2018. “Polarization, Partisanship and Junk News Consumption over Social Media in the US.” Oxford Internet Institute. http://arxiv.org/abs/1803.01845v1.

Nielsen, Rasmus Kleis, and Lucas Graves. 2017. “‘News You Don’t Believe’: Audience Perspectives on Fake News.” Reuters Institute. https://reutersinstitute.politics.ox.ac.uk/our-research/news-you-dont-believe-audience-perspectives-fake-news.

Nimmo, Ben, Graham Brookie, and Kanishk Karan. 2018. “#TrollTracker: Twitter Troll Farm Archives – Part Two.” Disinfo Portal (blog). October 17, 2018. https://disinfoportal.org/trolltracker-twitter-troll-farm-archives-part-two/.

Phillips, Whitney. 2011. “LOLing at Tragedy: Facebook Trolls, Memorial Pages and Resistance to Grief Online.” First Monday 16 (12). https://doi.org/10.5210/fm.v16i12.3168.

———. 2012. “This Is Why We Can’t Have Nice Things: The Origins, Evolution and Cultural Embeddedness of Online Trolling.” PhD diss., University of Oregon. ProQuest LLC. https://search.proquest.com/openview/b2a7a4c485c5d4afef6965aca6d2362c/1?pq-origsite=gscholar&cbl=18750&diss=y.

Quandt, Thorsten, Lena Frischlich, Svenja Boberg, and Tim Schatto‐Eckrodt. 2019. “Fake News.” In The International Encyclopedia of Journalism Studies, 1–6. John Wiley & Sons, Inc. https://doi.org/10.1002/9781118841570.iejs0128.

Redford, Patrick. 2019. “Cubs Fan Using ‘OK’ Hand Gesture Behind Doug Glanville Gets Banned Indefinitely.” Deadspin (blog). May 8, 2019. https://deadspin.com/cubs-fan-using-ok-hand-gesture-behind-doug-glanville-1834615036.

Skyrms, Brian. 2010. Signals: Evolution, Learning, and Information. Oxford University Press. https://www.oxfordscholarship.com/view/10.1093/acprof:oso/9780199580828.001.0001/acprof-9780199580828.

Sþe, Sille Obelitz. 2018. “Algorithmic Detection of Misinformation and Disinformation: Gricean Perspectives.” Journal of Documentation74 (2): 309–32. https://doi.org/10.1108/JD-05-2017-0075.

Sullivan, Margaret. 2017. “It’s Time to Retire the Tainted Term ‘Fake News.’” Washington Post, January 8, 2017. https://www.washingtonpost.com/lifestyle/style/its-time-to-retire-the-tainted-term-fake-news/2017/01/06/a5a7516c-d375-11e6-945a-76f69a399dd5_story.html.

Tandoc, Edson C., Zheng Wei Lim, and Richard Ling. 2018. “Defining ‘Fake News.’” Digital Journalism 6 (2): 137–53. https://doi.org/10.1080/21670811.2017.1360143.

Venturini, Tommaso. 2019. “From Fake to Junk News.” In Data Politics: Worlds, Subjects, Rights, edited by Didier Bigo, Engin Isin, and Evelyn Ruppert. Routledge.

Wardle, Claire, and Hossein Derakhshan. 2017. “Information Disorder: Toward an Interdisciplinary Framework for Research and Policy Making.” 162317GBR. Council of Europe. https://edoc.coe.int/en/media/7495-information-disorder-toward-an-interdisciplinary-framework-for-research-and-policy-making.html.

Woolley, Samuel C., and Philip N. Howard. 2018. Computational Propaganda: Political Parties, Politicians, and Political Manipulation on Social Media. Oxford University Press.

Zuckerman, Ethan. 2017. “Stop Saying ‘Fake News’. It’s Not Helping.” 
 My Heart’s in Accra (blog). January 30, 2017. http://www.ethanzuckerman.com/blog/2017/01/30/stop-saying-fake-news-its-not-helping/.