The tropes and metaphors of disinformation

It is a good time to be in the disinformation and misinformation business. Whether a commentator or company, publication or propagandist, all have enjoyed the tidal wave of resources that has poured into all things online media manipulation in the four years since 2016. As the work of MediaWell attests, simply keeping track of the emerging developments in the space, and rooting them in the established literature across many fields, is a Sisyphean effort.

The potent blend of public attention, funding, and rapid change has left its mark on the discourse around these topics. The rush of activity has led at times to a haphazard process of seizing on various metaphors for conceptualizing and understanding the problems of disinformation and misinformation. Jargon and tropes abound: one hears that the “post-fact society” is suffering from “truth decay,” brought on by “information warfare” enhanced with “computational propaganda.”

These tropes are important because they seep into every element of the ensuing discussion. How we tell the story of online disinformation informs the arguments, frames the research, and shapes the responses. It is critical to reflect on these tropes, and be clear about where they are helpful and where they may mislead the discussion.

This article is about a particularly ubiquitous trope: the conflict narrative. By this I mean the notion that battle—in a military sense—is the right way to think about our stance toward the problems of disinformation and misinformation. This article explores this narrative, its weaknesses, and what alternatives might provide a better path forward.

The conflict narrative

The language of combat is ubiquitous in discussions around online disinformation and misinformation. “Here’s How You Can Fight Back Against Disinformation,” reads one headline. “We’re in the Middle of a Global Information War,” declares another. The European Union announced an initiative to counter Covid-19-related disinformation, described as a “fight against disinformation [that involves] all European institutions.” The Brookings Institution published a paper entitled simply “How to Combat Fake News and Disinformation.” Russian interference tactics are considered examples of next-generation “information warfare” or “psychological warfare.”

War and conflict also color how responses to these problems are framed. “To fight disinformation, we need to weaponize the truth,” writes one startup CEO working on these problems. Citizen journalists are “the fighters on the frontline.” Commentators argue that it is important to avoid building an antiquated “Digital Maginot Line” and instead look to map “the battleground for the next information war.” Even COGSEC—a conference I help organize on disinformation issues—has declared itself “a conference at the front lines of the (dis)information wars.”

On a certain level, the conflict narrative is a natural way to frame the problems of disinformation. It is commonplace to talk about doing “battle” with a range of other amorphous societal ills: society wages “wars” against crime, poverty, and disease. The conflict narrative evokes the urgency of the issue at hand and lends itself to discussions about how societal resources will be marshaled to deal with a threat. It does not hurt that this narrative makes for stirring rhetoric, as well.

These martial comparisons serve a pragmatic purpose. They can rally citizens and inspire funders to open their pocketbooks. But the trouble is that the conflict narrative is fundamentally unsound. It presents a misleading picture of the situation, and in doing so suggests interventions that may be ineffectual or may even work to exacerbate the problem.

There are three problems with the conflict narrative. First, it erroneously makes disinformation itself the focus of the struggle. Second, it implies a level of control over social persuasion and influence that does not exist in practice. And finally, it suggests a state of “peace” that is deeply misleading.

Disinformation as the enemy

The conflict narrative performs a neat intellectual sleight of hand. Disinformation and misinformation are, after all, not actual “things” that can be defeated or destroyed. As a result, the “war on disinformation” is as ambiguous as the “war on terror” or the “war on drugs.” The specifics are simply left for another day.

This is more than a semantic quibble. Having suited up for war, we end up being uncertain about where—exactly—the battlefield is. Is the fight against disinformation actually a fight against those who spread disinformation or misinformation? Or perhaps against certain channels and tools for spreading these ideas? Is the battle against those who believe falsehoods? Which falsehoods?

This vagueness has made “disinformation” and “misinformation” a big tent, bringing together a multifaceted community of scholarly researchers, technologists, policymakers, journalists, and others who may not have otherwise been in dialogue with one another. At the same time, vagueness has worked to paper over key divergences in opinions and objectives across this community. A speaker typically has quite concrete answers to the questions of “who” and “what” in mind when they talk about the fight against disinformation. But it is a little impolitic (and at times a little dangerous) to come out and simply say that “the goal of all this is to regulate Facebook and Twitter,” or “the problem is that we believe that holding certain political beliefs is illegitimate in a just society,” or simply “the real goal here is to thwart Russian intelligence.”

“Disinformation” is comforting in its vagueness and its pretend objectivity. It can serve as a convenient screen for many different actors to hide the inescapable normative judgments that come with defining an end goal around these issues. Though the conflict narrative suggests that we are ready for aggressive action, it is curiously shy about what, exactly, we want to fight.

To the extent that tropes like the conflict narrative help us frame and understand the disinformation landscape, we should pick ones that allow those honest opinions to be front and center. If the battle against disinformation is, for example, actually a battle against corporate power over the flow of information in society, then thwarting that power should be what is at the forefront of the discussion.

The Napoleons of the information war

The language of military combat implies the existence of command. Armies need generals to marshal their forces. The problems of online disinformation are, in other words, cast as a struggle between a few protagonists.

This trope emerges repeatedly in popular portrayals of online disinformation. The media has tended to spotlight the importance of certain figures or small cadres in the battle of influence over the public mind. The Trump 2020 presidential campaign operates out of an office described as “the Death Star.” The campaign is captained by Brad Parscale, a political “genius” seeking ways to “open a new front” in the information war. Facebook’s “war rooms” and “operations centers” bring together small teams of staffers to monitor social media activity and combat election interference. There is the mysterious General Gerasimov, whose tactical manifestos have allegedly defined how Russia launches online disinformation efforts (they haven’t). Disinformation and misinformation are abstract challenges that beg for a face. It is no surprise that the story frequently leans on the crutch of a small cast of characters that help to dramatize the situation.

But this notion of protagonists and antagonists squaring off on the information battlefield may suggest a level of influence that these players do not actually have. Evidence that the Russian interference campaign (or the internet in general) materially influenced the outcome of the 2016 US presidential election is mixed at best. Experiments on the much-touted influence of “micro-targeting” in online advertising do not show that these tools can produce significant changes in behavior. Claims that online disinformation campaigns have wrought an “information disorder” where the public does not trust institutions are undermined by the fact that trust in many institutions has been declining for decades, even before the significant use of these methods.

Whether self-appointed or media-anointed, the “generals” of information warfare have strong incentives to play up this aspect of the conflict narrative. Interests of all kinds are seeking gurus with special abilities to shape public opinion. While the perpetrators of disinformation campaigns and those defending against them are often at odds, they have a mutual interest in perpetuating a narrative in which they are the linchpins in determining the outcome of persuasive combat.

But this view erases perhaps the most important part of any persuasive effort: the audience itself. The public is flattened conceptually into a set of passive chess pieces, subject to the whims of sinister masterminds or truth-bearing “soldiers” who fight over them. This denies the volition and collective power that the public really has over these issues.

Faith in the potency of a few individuals in determining the spread of disinformation and misinformation is not just inaccurate; it also leads to prescriptions that may ultimately be ineffectual. The tendency is to believe that “if only” a few key individuals were effectively checked, the threat posed by online disinformation and misinformation would be largely resolved. This may be out of proportion with the influence that these actors actually wield over whether or not misinformation and disinformation spread.

Peace in our time?

The framing of war also suggests its opposite: peace. Implicitly, conflict narratives suggest both a nostalgia for the past, before the onset of conflict, and an aspirational future when conflict will no longer be needed. Both are problematic ways of thinking about the issue.

It is unclear whether the concept of some halcyon preconflict “peacetime” is actually sensible here. The evidence that we live in uniquely misinformed times is thin. The historical record makes it clear that well-funded campaigns of disinformation are nothing new. The recent wave of public attention and funding toward these issues after 2016 should be seen as reflecting near-term political anxieties, not a fundamental shift in the truth or the public’s relationship to it.

Yet commentators frame the social media era as “post-truth” or “post-fact,” as if there were a “truth-based” or “fact-based” moment that society might return to. The fact is that it makes little sense to attempt to “turn back the clock” or wax nostalgic about a time before disinformation. Interventions that attempt to do so risk chasing after a world that never truly existed, and might have little impact on contemporary problems of disinformation and misinformation.

The existence of an “information war” also suggests the possibility of a victory. But what would such a victory look like? Presumably it would not be a world without lies, or even a world without coordinated efforts at persuasion. Presumably it would not be the signing of an armistice or the surrender of an enemy army that would mark the end of the conflict. There will always be more false narratives and more perpetrators of false narratives that one might choose to fight.

In this sense, to “combat” disinformation uncritically is to accept a state of endless warfare. As with other endless wars, we might be better served by examining the assumptions that lead to these types of conflict in the first place, and deciding whether committing to those assumptions is worth it.

A better lens

The conflict narrative misleads. It lends itself to vagueness about what is being fought, overemphasizes the influence of individual figures, and suggests an end goal that is impossible to achieve. The narrative produces inaccurate assessments of the problem at hand and suggests interventions that may make little real difference in the spread of disinformation and misinformation.

What might be better than the conflict narrative? What should replace it?

It is worth giving credit where credit is due: the conflict narrative does have its merits. It draws attention to the tangible, destructive qualities of disinformation and misinformation. The evocation of warfare conveys some of the seriousness of being targeted by a campaign of online harassment, and the very real health consequences of medical misinformation.

In the words of George Box, “All models are wrong, but some are useful.” The objective should not be to identify a metaphor that is perfect in all respects, or one that crowds out all others. What matters is whether a lens opens up new approaches to the problem that may prove more robust over the long term.

Informational climate

One alternative might be to draw on environmental science as an inspiration for thinking about the problems of disinformation and misinformation. In this approach, the circulation of information online is akin to the climate. This captures two aspects of disinformation and misinformation often missing from the conflict narrative.

First, a climate frame better captures the individual experience of disinformation and misinformation. Truth, falsehood, and the use of persuasive tactics both overt and covert are continually present in different degrees, particularly online. Most people are not engaging in a conscious struggle against disinformation and misinformation so much as simply experiencing them as part of their lives. There is not a “front line” of struggle so much as a hot or cold front passing through a region.

Second, a climate approach nicely reframes disinformation and misinformation without losing sight of human responsibility. Disinformation cannot be “fought” any more than the climate can be “fought.” But this does not mean that no one is at fault for the long-term trends in disinformation or climate change. The struggle is not so much against disinformation and misinformation per se but against the forces that are the root cause of these problems. This frame forces the question of who is responsible for the problem. It eliminates the “fight against disinformation” as a hazy euphemism that people use when they hesitate to name the particular actors they believe are at fault. It also highlights the degree to which externalities and collective-action problems can contribute to systemic problems.

Perhaps the most important contribution a climate paradigm might make to thinking about the problems of “information disorder” is to change the nature of our responses to these issues. The conflict narrative suggests a future moment of triumph, when the perpetrators of disinformation and the spread of misinformation are finally vanquished. The climate frame strikes the notion of “victory” from the agenda.

Pollution in the atmosphere is never “defeated” in some absolute sense; it is simply managed. This aligns more closely with the reality of disinformation and misinformation. Adopting an environmental frame would help to underscore the limitations of what interventions can really achieve, and the need to build sustainable approaches that can shape the long-term informational “climate.”

What do those sustainable approaches look like? Building resilience is one path. Weather, the short-term expression of the climate, can be unpredictable, and the signals that indicate it may take a turn for the worse can be unreliable. We know that hurricanes occur every year, but we never know where they might strike. Establishing effective emergency management institutions and plans becomes essential. The capacity of a society to effectively absorb threats from the environment becomes an important part of the conversation.

Resilience may be a more robust approach over the long haul. Disinformation and misinformation are cheap to create and distribute, and will become increasingly so. It will be impossible to defeat all perpetrators of disinformation, or successfully anticipate where all the threats might emerge from. Communities must anticipate that the worst will happen and develop processes to nimbly manage the harm. We should spend more time thinking about how to organize the “first responders” to disinformation and misinformation disasters, rather than how to recruit “soldiers” to fight at the front.

Resilience does not place all the burden on individuals and communities. Thinking about the climate as a paradigm for disinformation and misinformation also refocuses discussion on the importance of infrastructure. Resilience does not just depend on the ability of communities to withstand a storm or other natural disaster. Ensuring that the levees hold up when they are needed is equally important.

The climate narrative deemphasizes defeating the villains of a particular moment in favor of confronting the online platforms that more deeply shape information flow. It also focuses on the investment (or lack thereof) that online platforms put into maintenance over time, rather than the actions they take to deal with a specific high-profile moment.

Achieving metaphorical change

There is a reason that the conflict narrative has proven to be such a persistent and popular way of framing the problems of disinformation and misinformation. It creates drama, forces individuals and organizations to pick sides, and makes clear the urgency of the issue.

But these are the wrong reasons to adopt the conflict narrative. Conflict narratives fail to capture many important aspects of disinformation and misinformation, and tend to focus on interventions that may do little to deal with these threats over the long term. We should always be looking for alternatives that enable us to think beyond the confines of our existing frame.

A climate narrative presents a path forward. It captures the individual experience of disinformation and misinformation better and more accurately characterizes responsibility around these problems. It also points us toward sustainable approaches that focus on building societal resilience, and underscores the need to shape our infrastructures for the better.

Regardless of whether a climatological approach comes to supersede the conflict narrative, it seems likely that the framing of information disorder as a “battle” will run its course. “Victories” scored in thwarting a given disinformation effort or fact-checking a piece of misinformation into oblivion will not make enduring changes in society’s relationship with falsehood and rumor. The moment of triumph promised by thinking about these issues as a “battle” will fail to materialize. While the conflict narrative may have galvanized action, it will ultimately be critical to find a paradigm that can guide our energies in the next round of research and action.