Key Points:
- The academic community has gravitated towards a set of definitions that identifies disinformation as intentional, and misinformation as unintentional. Under this kind of definition, false narratives can be either dis- or misinformation, based on the intention of the spreader.
- Because intent is notoriously difficult to determine in social science, it may be best to speak of dis- and misinformation together.
- “Fake News” is seen as a highly problematic, politicized term, and most disinformation scholars tend to avoid it.
- Some prominent scholars use the term “propaganda” to refer to specific kinds of communication, but the word has a long and complicated intellectual history.
Defining "disinformation" and related concepts
One of the most urgent tasks facing scholars and the public alike is identifying shared definitions of disinformation and related topics across academic disciplines. As Jack (2017) and Wardle and Derakhshan (2017) note, disinformation scholars, journalists, policymakers, and other workers in the arena of political communication employ a wide array of subtly different but equally loaded definitions. As Karlsen (2016) notes, there is no overarching theoretical framework for studying the concept of influence, only a patchwork of methodologies from various social science disciplines. Wardle and Derakhshan further argue that this lack of consensus and definitional rigor in terminology led to an early stagnation in the field of disinformation studies. All of the MediaWell research reviews draw on reports, conference papers, and other material in addition to peer-reviewed academic publications, but this one perhaps more than most. For a brief explanation of these types of content, please see our FAQ.
Wardle and Derakhshan's own definition of disinformation (and by extension, misinformation) has gained ground in the community of scholars, activists, and nonprofits attempting to understand and mitigate these conditions. In their interpretation, disinformation—"information that is false and deliberately created to harm a person, social group, organization or country"—is one of three types of content contributing to "information disorder." The others are misinformation, which they define as "information that is false, but not created with the intention of causing harm," and malinformation, factual information released to discredit or harm a person or institution, such as doxing, leaks, and certain kinds of hate speech (2017, 20).
In this articulation, the crucial distinction between disinformation and misinformation lies in the question of intent. Producers of disinformation have made conscious decisions to propagate narratives that are false or misleading. On the other hand, while misinformation may begin as disinformation, it is spread with indifference to its truth value, or a lack of awareness that it is false. Misinformation may be spread to entertain, educate, or provoke. As such, a single false narrative or piece of content may cross this blurry line from disinformation to misinformation and back again, depending on its various sources and their perceptions of the truth or falsity of that narrative.
Wardle and Derakhshan are not alone in basing the distinction between disinformation and misinformation on intent. Other scholars have suggested similar definitions, such as Floridi (2011), Søe (2018), and Hwang (2017), who includes "intentional actions" in his definition of disinformation. Bennett and Livingston (2018, 124) also describe disinformation as intentional, but limit its form to "intentional falsehoods spread as news stories or simulated documentary formats to advance political goals." While Jack (2017, 3) defines disinformation as "deliberately false or misleading," which would imply intent, she warns that an intention-based definition contributes to a power imbalance between producers of disinformation and their critics. Journalists and social scientists, she writes, tend to refrain from accusations of intent due to professional codes and legal constraints. Creators of disinformation face no such constraints, while their potential critics remain bound by those professional and legal risks.
Intent is also difficult for social scientists to assess, though they have some options. Ethnographic observation, surveys, and experiments that capture motivations for sharing disinformation versus misinformation versus factual information can all help assess intent, but it remains hard to gauge. In that light, how useful are definitions that rely on distinctions that are difficult to measure? The question of intent is further complicated by discursive styles prevalent on the English-language internet—especially among trolls—that make intent simultaneously claimable and deniable. One such style is popularly termed "irony." Marwick (2018), pointing to "the dominance of irony as an expressive and affective force in native internet content," argues that it can make intent-based distinctions misleading. Witness the early 2019 example of a Chicago Cubs fan who was accused of flashing a white-supremacy hand symbol behind African American commentator Doug Glanville. The symbol began as a 4chan hoax to "trigger" liberals by assigning hateful meanings to innocuous signs. The gesture has since expanded to wider use on the political right—including by some avowed white supremacists. Though its meanings remain contested and ambiguous, the Cubs felt those meanings were clear enough to ban the fan from the ballpark (Anti-Defamation League 2018; Redford 2019).
Skyrms (2010) bases his definition of disinformation (though he uses the term "deception") on the idea of a cost to the recipient and a benefit to the sender, avoiding the question of intent. In a critique of Skyrms's definition, Fallis (2015) argues that it is too narrow, as some disinformation—like false compliments—may actually benefit the recipient. Fallis himself takes a subtly different approach to defining disinformation in an attempt to avoid both the question of intent and the idea of benefits. Fallis's definition depends on the concept of function, which is a quality that a thing acquires, whether through evolution or the intent of a designer. His short definition reads: "disinformation is misleading information that has the function of misleading someone" (2015, 413; italics in original). This approach, Fallis argues, incorporates cases of disinformation that are intended to be misleading—such as lies—but also cases where the source benefits from misleading, such as conspiracy theories or shitposting.
Other proposed definitions for mis- and disinformation and related topics nod toward other characteristics. Gelfert (2018, 103), in a discussion of "fake news" as a subset of disinformation, argues that it is distinguished by "systemic features inherent in the design of the sources and mechanisms that give rise to them," and that "fake news" must both mislead and do so as a consequence of its design. Gelfert suggests such systemic features might include manipulating consumers' confirmation biases, repeating similar false narratives to render them more persuasive, and attempting to intensify consumers' partisanship. Those systemic features work to inhibit critical reasoning and inquiry, Gelfert argues, and encourage users to further disseminate such content. While Gelfert's articulation still relies on intent, a definition of disinformation that incorporates systemic features of sources and channels may have potential.
To readers outside of academia, these debates and subtle distinctions over disinformation and misinformation may seem entirely … academic. Perhaps, as with US Supreme Court Justice Potter Stewart's famous observation about pornography, we should satisfy ourselves with knowing disinformation when we see it. However, the political climates in many parts of the world clearly demonstrate that different groups can interpret the same sets of facts very differently, and reveal how social circumstances, identity, and background shape different epistemologies—theories of what knowledge is—within and between social contexts. This is not to say that academics and researchers should content themselves with esoteric definitions that satisfy them but do not reflect the understandings of broader elements of society—quite the contrary. Nielsen and Graves (2017) offer a useful reminder that academics, researchers, and policymakers think about some of these terms and concepts in very different ways than members of mass-media audiences. In their factsheet reporting findings from a survey and a series of focus groups on "fake news," a concept we discuss further below, they report that news consumers "see the difference between fake news and news as one of degree rather than a clear distinction." Their respondents treated "fake news" as a diverse class of information forms, and also recognized that the term had been weaponized. Many respondents included sensationalist journalism, hyperpartisan news, some kinds of advertising, and other types of content on the information spectrum. Nielsen and Graves argue that audiences' considerations have been left out of the academic and policy debates, and that audiences express "a deeper discontent with many public sources of information, including news media and politicians as well as platform companies" (2017, 1, 7).
This argument reflects a much broader societal issue, as well as a disconnect between academics, policymakers, and people in other walks of life. We hope that MediaWell and similar projects can begin to bridge this gap by translating academic knowledge for mass audiences, while at the same time serving as a portal for academics, policymakers, and technologists to share information that reflects how people understand political information in their daily lives.
It's important to remember as well that disinformation is medium-agnostic, even though the ways it is created, shared, and received can be heavily shaped by medium. That is to say, disinformation is not limited to text. It exists in video and audio forms, such as YouTube videos and podcasts, and can certainly be spread by word of mouth. Visual disinformation can be particularly problematic, as it can be both more persuasive and harder to debunk. To date, however, many studies and corrective efforts have focused on text (Wardle and Derakhshan 2017).
So where does all of this leave us in our definition? As Fallis (2015, 416) discusses in a response to Floridi (2012), disinformation may not be perfectly definable. "There may simply be prototypical instances of disinformation, with different things falling closer to or further from these prototypes," he writes.
Nevertheless, we know that social actors are producing disinformation, and that it spreads through societies as both disinformation and misinformation (assuming we accept some sort of distinction between them). In that light, it may be most useful to continue to discuss dis- and misinformation together, or use a blanket term such as "problematic information," which Jack (2017) employs to encompass both mis- and disinformation, themselves distinguished by intention. In keeping with the consensus emerging from Jack, Wardle and Derakhshan, and other authors, the MediaWell project provisionally defines "disinformation" as: a rhetorical strategy that produces and disseminates false or misleading information in a deliberate effort to confuse, influence, harm, mobilize, or demobilize a target audience. As its bedfellow, misinformation could then be defined as false or misleading information, spread unintentionally, that tends to confuse, influence, harm, mobilize, or demobilize an audience. We recognize the limitations of an intention-based distinction between dis- and misinformation, and suggest that they be considered together in a way that allows for their mutability. At the risk of some clunky sentences, we try to speak of "dis- and misinformation" unless the specifics of the communication are known.
Fake news and junk news
The term "fake news" is itself quite problematic, as the research mentioned above by Nielsen and Graves (2017) reflects. Wardle and Derakhshan, Jack, and many other observers eschew it, arguing that the phrase is both imprecise and politically loaded, and that autocratic leaders have begun to weaponize the term "fake news" to stifle dissent and delegitimize criticism of their regimes and allies (Jack 2017; Quandt et al. 2019; Sullivan 2017; Wardle and Derakhshan 2017; Zuckerman 2017). Moreover, as Tandoc, Lim, and Ling (2018) discuss, any critical assessment of the term "fake news" demands that we first determine what non-fake "news" is—and that is a surprisingly difficult, subjective, and highly contextual endeavor.[1] In a slightly different vein, Farkas and Schou (2018) suggest that the many uses and meanings of the term "fake news" indicate that it has become part of a larger struggle over the nature of politics and social realities in an environment of colliding worldviews.
The term "junk news" has seen some use among scholars hoping to describe the phenomenon without the additional political payload of "fake news." Narayanan et al. (2018) describe junk news in their report as "various forms of extremist, sensationalist, conspiratorial, masked commentary, fake news and other forms," while Venturini (2019) suggests that junk news should be defined by its virality rather than its falsity. The idea of false news, he argues, supposes that real news reproduces reality, while in fact it is a mediated, framed journalistic product. Junk news is addictive, and "dangerous not because it is false, but because it saturates public debate." So far, however, the term "junk news" has not seen widespread adoption by the social science community.
Propaganda
A related concept to disinformation, the term "propaganda" is employed to describe the contemporary information environment by prominent scholars such as Benkler, Faris, and Roberts (2018) and Howard (Howard and Kollanyi 2016; Woolley and Howard 2018). Benkler, Faris, and Roberts briefly define propaganda as "the intentional manipulation of beliefs," and more elaborately as "communication designed to manipulate a target population by affecting its beliefs, attitudes, or preferences in order to obtain behavior compliant with political goals of the propagandist" (2018, 6, 29). Such a definition is broader than disinformation, as it would include factually true information framed in such a way as to obtain compliance. Both "propaganda" and "disinformation" could also include paltering, the use of factually true statements that nonetheless constitute an active deception (Gino 2016).
The term "propaganda" has a complicated history, especially for US audiences. The roots of the term date back hundreds of years, at least, and relate to the spread of religious doctrine. It did not acquire significantly negative meanings until the World Wars. Since then, rivals have frequently labeled their opponents' messages as propaganda in line with these negative understandings (Auerbach and Castronovo 2013; Jack 2017).
Benkler, Faris, and Roberts's use of the term propaganda stands in contrast to Jacques Ellul's reconsideration of propaganda as a "sociological phenomenon" in technological society, one that encompasses both psychological warfare and the subtler forces that encourage conformity, the educations and socializations that "seek to adapt the individual to a society, to a living standard, to an activity" (1973, xii–xiii, xvii). In Ellul's articulation, propaganda encompasses both deliberate political propaganda and what he calls "sociological" propaganda, the unconsciously reproduced messages in Hallmark cards, gym memberships, junk mail, Oscar ceremonies, cereal boxes, tax forms, this website … the list goes on. These are the messages that condition individuals not only to conform to societies but also to believe that their societies' ways of life are good and superior. In short, in this understanding, these messages allow large societies to function at the scale where individuals do not know one another (1973, 61–70).
Benkler, Faris, and Roberts exclude this sociological propaganda from their articulation, instead returning "propaganda" to its more bounded meanings of overtly political and intentionally persuasive communication. They follow Jowett and O'Donnell (1992, 2011) in their intellectual history of propaganda, which treats it as a type of communication that differs from persuasion; where persuasion advances the aims of both communicants, propaganda only advances the aims of the propagandist. In other words, both communicants can benefit from persuasion, which is a reciprocal process based on interaction and deliberation, and which can promote mutual understanding. Propagandists, in this understanding, are self-motivated, and do not attempt to benefit their audience. Their true motives and identities may be concealed, and they do not attempt to develop mutual understanding.
Trolling
Social scientists have taken various approaches to studying the phenomenon of trolling, ranging from online ethnographic research to psychological profiles. Trolling has been compared to bullying, but also described as a purposeful disruption of a community or conversation for lulz. One of the complicating factors in establishing a definition is that trolling has changed as the internet has changed—it is now more formalized, with shared language and identity, and some within the subculture see it as performance art (Higgin 2013; Phillips 2011, 2012).
From a psychological approach, Buckels, Trapnell, and Paulhus (2014, 97) found that trolling "correlated positively with sadism, psychopathy, and Machiavellianism," elements of the so-called Dark Tetrad of personality. They define trolling as "deceptive, destructive, or disruptive [behavior] … with no apparent instrumental purpose." However, some behavior that has come to be called "trolling" does have an instrumental purpose—at least one that is apparent to the trolls themselves. Russian and Iranian online influence campaigns have been referred to as "troll farms" (Nimmo, Brookie, and Karan 2018). Their intentions and methods varied, but both targeted polarized online communities to further national goals (for more on these operations, see our "Election Interference" literature review). Whether these operations and their agents are close enough to more traditional forms of trolling for lulz, or whether they merit their own distinct terminology, is an open question.
Our grateful acknowledgement to Connie Moon Sehat, Kris-Stella Trump, and Lauren Weinzimmer for their feedback during the writing process for this research review.
[1] For three very different theoretical approaches to the question of what "news" is, we recommend Schudson (2011), The Sociology of News; Hardt (2004), Myths for the Masses; and Herman and Chomsky (1988), Manufacturing Consent.
Works Cited
Anti-Defamation League. 2018. "How the 'OK' Symbol Became a Popular Trolling Gesture." Anti-Defamation League. Updated September 5, 2018. https://www.adl.org/blog/how-the-ok-symbol-became-a-popular-trolling-gesture.
Auerbach, Jonathan, and Russ Castronovo. 2013. "Introduction: Thirteen Propositions About Propaganda." In The Oxford Handbook of Propaganda Studies, edited by Jonathan Auerbach and Russ Castronovo. Oxford University Press.
Benkler, Yochai, Robert Faris, and Hal Roberts. 2018. Network Propaganda: Manipulation, Disinformation, and Radicalization in American Politics. Oxford University Press.
Bennett, W. Lance, and Steven Livingston. 2018. "The Disinformation Order: Disruptive Communication and the Decline of Democratic Institutions." European Journal of Communication 33 (2): 122–39. https://doi.org/10.1177/0267323118760317.
Buckels, Erin E., Paul D. Trapnell, and Delroy L. Paulhus. 2014. "Trolls Just Want to Have Fun." Personality and Individual Differences, The Dark Triad of Personality, 67 (September): 97–102. https://doi.org/10.1016/j.paid.2014.01.016.
Ellul, Jacques. 1973. Propaganda: The Formation of Men's Attitudes. Translated by Konrad Kellen and Jean Lerner. New York: Vintage.
Fallis, Don. 2015. "What Is Disinformation?" Library Trends 63 (3): 401–26. https://doi.org/10.1353/lib.2015.0014.
Farkas, Johan, and Jannick Schou. 2018. "Fake News as a Floating Signifier: Hegemony, Antagonism and the Politics of Falsehood." Javnost - The Public 25 (3): 298–314. https://doi.org/10.1080/13183222.2018.1463047.
Floridi, Luciano. 2011. The Philosophy of Information. Oxford University Press.
———. 2012. "Steps Forward in the Philosophy of Information." Etica E Politica 14 (1): 304–310.
Gelfert, Axel. 2018. "Fake News: A Definition." Informal Logic 38 (1): 84–117. https://doi.org/10.22329/il.v38i1.5068.
Gino, Francesca. 2016. "There's a Word for Using Truthful Facts to Deceive: Paltering." Harvard Business Review, October 5, 2016. https://hbr.org/2016/10/theres-a-word-for-using-truthful-facts-to-deceive-paltering.
Higgin, Tanner. 2013. "FCJ-159 /b/Lack up: What Trolls Can Teach Us About Race." The Fibreculture Journal 159 (22). http://twentytwo.fibreculturejournal.org/fcj-159-black-up-what-trolls-can-teach-us-about-race/.
Howard, Philip N., and Bence Kollanyi. 2016. "Bots, #StrongerIn, and #Brexit: Computational Propaganda during the UK-EU Referendum." SSRN, June. https://papers.ssrn.com/sol3/papers.cfm?abstract_id=2798311.
Hwang, Tim. 2017. "Digital Disinformation: A Primer." Atlantic Council. https://www.atlanticcouncil.org/publications/articles/digital-disinformation-a-primer.
Jack, Caroline. 2017. "Lexicon of Lies: Terms for Problematic Information." Data & Society.
Jowett, Garth, and Victoria O'Donnell. 1992. Propaganda and Persuasion. Sage Publications.
Jowett, Garth S., and Victoria O'Donnell. 2011. Propaganda & Persuasion. SAGE Publications.
Karlsen, Geir Hågen. 2016. "Tools of Russian Influence: Information and Propaganda." In Ukraine and Beyond: Russia's Strategic Security Challenge to Europe, edited by Janne Haaland Matlary and Tormod Heier, 181–208. Cham: Springer International Publishing. https://doi.org/10.1007/978-3-319-32530-9_9.
Marwick, Alice E. 2018. "Why Do People Share Fake News? A Sociotechnical Model of Media Effects." Georgetown Law Technology Review 474.
Narayanan, Vidya, Vlad Barash, John Kelly, Bence Kollanyi, Lisa-Maria Neudert, and Philip N. Howard. 2018. "Polarization, Partisanship and Junk News Consumption over Social Media in the US." Oxford Internet Institute. http://arxiv.org/abs/1803.01845v1.
Nielsen, Rasmus Kleis, and Lucas Graves. 2017. "'News You Don't Believe': Audience Perspectives on Fake News." Reuters Institute. https://reutersinstitute.politics.ox.ac.uk/our-research/news-you-dont-believe-audience-perspectives-fake-news.
Nimmo, Ben, Graham Brookie, and Kanishk Karan. 2018. "#TrollTracker: Twitter Troll Farm Archives – Part Two." Disinfo Portal (blog). October 17, 2018. https://disinfoportal.org/trolltracker-twitter-troll-farm-archives-part-two/.
Phillips, Whitney. 2011. "LOLing at Tragedy: Facebook Trolls, Memorial Pages and Resistance to Grief Online." First Monday 16 (12). https://doi.org/10.5210/fm.v16i12.3168.
———. 2012. "This Is Why We Can't Have Nice Things: The Origins, Evolution and Cultural Embeddedness of Online Trolling." PhD diss., University of Oregon. ProQuest LLC. https://search.proquest.com/openview/b2a7a4c485c5d4afef6965aca6d2362c/1?pq-origsite=gscholar&cbl=18750&diss=y.
Quandt, Thorsten, Lena Frischlich, Svenja Boberg, and Tim Schatto-Eckrodt. 2019. "Fake News." In The International Encyclopedia of Journalism Studies, 1–6. John Wiley & Sons, Inc. https://doi.org/10.1002/9781118841570.iejs0128.
Redford, Patrick. 2019. "Cubs Fan Using 'OK' Hand Gesture Behind Doug Glanville Gets Banned Indefinitely." Deadspin (blog). May 8, 2019. https://deadspin.com/cubs-fan-using-ok-hand-gesture-behind-doug-glanville-1834615036.
Skyrms, Brian. 2010. Signals: Evolution, Learning, and Information. Oxford University Press. https://www.oxfordscholarship.com/view/10.1093/acprof:oso/9780199580828.001.0001/acprof-9780199580828.
Søe, Sille Obelitz. 2018. "Algorithmic Detection of Misinformation and Disinformation: Gricean Perspectives." Journal of Documentation 74 (2): 309–32. https://doi.org/10.1108/JD-05-2017-0075.
Sullivan, Margaret. 2017. "It's Time to Retire the Tainted Term 'Fake News.'" Washington Post, January 8, 2017. https://www.washingtonpost.com/lifestyle/style/its-time-to-retire-the-tainted-term-fake-news/2017/01/06/a5a7516c-d375-11e6-945a-76f69a399dd5_story.html.
Tandoc, Edson C., Zheng Wei Lim, and Richard Ling. 2018. "Defining 'Fake News.'" Digital Journalism 6 (2): 137–53. https://doi.org/10.1080/21670811.2017.1360143.
Venturini, Tommaso. 2019. âFrom Fake to Junk News.â In Data Politics: Worlds, Subjects, Rights, edited by Didier Bigo, Engin Isin, and Evelyn Ruppert. Routledge.
Wardle, Claire, and Hossein Derakhshan. 2017. "Information Disorder: Toward an Interdisciplinary Framework for Research and Policy Making." 162317GBR. Council of Europe. https://edoc.coe.int/en/media/7495-information-disorder-toward-an-interdisciplinary-framework-for-research-and-policy-making.html.
Woolley, Samuel C., and Philip N. Howard. 2018. Computational Propaganda: Political Parties, Politicians, and Political Manipulation on Social Media. Oxford University Press.
Zuckerman, Ethan. 2017. "Stop Saying 'Fake News'. It's Not Helping." … My Heart's in Accra (blog). January 30, 2017. http://www.ethanzuckerman.com/blog/2017/01/30/stop-saying-fake-news-its-not-helping/.