Social Science Research Council Research AMP Just Tech
Research Review

Toolkit for Trust: Strategies for Better Online Communication

It can be hard to know how to engage with others on difficult topics in the current digital landscape. While social media platforms are built for simple exchanges, some subjects – especially those that involve inaccurate, complex, or contentious information, like vaccines or elections – require more nuance. Finding the right approach to discuss these issues can be draining, and doubly so when those attempts backfire.

But having these conversations is more important than ever. To help navigate them, the Analysis and Response Toolkit for Trust (ARTT) project, now part of Discourse Labs, has developed a framework of evidence-based, goal-oriented interventions, sourced from researchers in public health, psychology, conflict resolution, media literacy, and other fields. 

The goal of the ARTT project is to promote trust-building conversation and empower local communicators. Its findings are especially relevant for those in public health, education, politics, and civil society, but the framework can be applied by anyone who seeks to keep their local or digital community – however large or small – informed with reliable resources.

The ARTT framework is made up of four groups and ten guided response modes:

  1. Understand: Listen, empathize, and take perspective; 
  2. Inform: Correct, co-verify, and encourage healthy inquiry; 
  3. Connect: Share, invite, and de-escalate; and 
  4. Also consider: Do not respond.

The following review provides a summary of these guided responses, as well as what existing research recommends about when and how to apply them in difficult online conversations. It offers a condensed snapshot of the ARTT Catalog, a living, collaborative database of peer-reviewed insights broadly related to the ARTT mission of healthy online communication; this research was compiled by SSRC in support of the project. The Catalog is the basis for the ARTT Guide, a forthcoming Web-based software assistant that helps identify specific recommendations, and the Curriculum, which seeks to offer practical support for local election officials.

1. Understand: Listen, Empathize, Take Perspective

Understanding is a response type that aims to comprehend and consider the other person. It means accepting people's emotions – by listening, empathizing, and taking perspective – even when those emotions are contrary to fact, as a starting point for further discussion.

1.1. Listen

Listening may not seem like much of a response – which is exactly what makes it an important place to begin. In polarized environments, there can be a tendency to assume another person's thoughts, feelings, or concerns about a particular issue – assumptions that may or may not be accurate; similarly, both sides often default to waiting for their turn to respond, thinking of counter-arguments while the other side is still speaking (Itzchakov et al. 2023).

Listening helps reframe the conversation from an argument to “win” to an opportunity to engage. Current research emphasizes listening as a critical part of trust-building exchanges (Santoro & Broockman 2022), especially in terms of fostering possibilities for longer-term dialogue (Lederach 2003; Bojer et al. 2006; Dorjee & Ting-Toomey 2020) or engagement outside the immediate topic of discussion (Yeomans et al. 2020; Hartman et al. 2022).

“High quality” listening that is attentive, understanding, and well-intentioned has demonstrated strengths in terms of conflict resolution (Itzchakov et al. 2017; Itzchakov et al. 2020b; Collins 2022; Itzchakov et al. 2023) and affective polarization (Santoro & Broockman 2022). Because it centers empathy and non-judgement, it reduces speakers’ social anxiety and feelings of defensiveness (Itzchakov et al. 2017). High-quality listening also decreases perceived polarization and attitude extremity by increasing positivity resonance (like “shar[ed] positive affect” and “mutual care and concern”) and self-insight (by “motivat[ing] exploration of both the issue at hand and oneself, including one’s possible biases concerning the topic”) (Itzchakov et al. 2023, 3). It can lead to a domino effect of meaningful exchange – according to Kluger et al. (2021), respondents who feel listened to are more open to listening in return.

These effects can be maximized in a few ways. Listening training can help listeners feel less anxious and more likely to take perspective over the course of the conversation (Itzchakov 2020a). Journalists or other communicators with a large audience have the ability to improve societal listening by reducing the partisan cues that make people less willing to engage with the other side (Arendt et al. 2023). For highly contentious on- or offline community debates, participatory interventions like horizontal listening – active, open listening between fellow citizens in the public sphere – can make vertical exchanges (with politicians or corporations) more effective (Hendriks et al. 2019).

These benefits rely on 1) letting the other person know you’re listening, and 2) making them feel genuinely heard. That effect can be difficult to achieve in online spaces, and the stakes of perceived poor listening are high. Portland’s Restorative Listening Project (RLP) seeks to use dialogue as a strategy for community formation; in one early study, the project backfired when white participants had obvious difficulty listening to Black participants’ concerns, resulting in worse intergroup relations than before (Drew 2012; for more on safe spaces for expression and listening among marginalized communities, see Mansbridge & Latura (2016)). 

But genuine listening can occur, even virtually. “Active listening” involves restating or paraphrasing the speaker’s – or, in this case, user’s – words to confirm what was heard and what was meant, as a way to convey that sense of attention (Rogers & Farson 1957). Collins (2022) emphasizes the value of these explicit confirmations. For in-person communication, implicit cues like nodding or making eye contact can give the impression of listening without any real cognitive engagement (ibid). This approach has demonstrated effectiveness online: in a study of posts written by Wikipedia editors, Yeomans et al. (2020) explore how such “explicit acknowledgements of understanding” are part of the “receptiveness recipe” that forestalls conflict escalation and reduces personal attacks online (p. 140). 

1.2. Empathize

Empathy is the ability to recognize, understand, and share the thoughts and feelings of another – to identify with someone else on an emotional level. Existing research describes the affective (i.e., the ability to experience the emotions of others, as described above) and cognitive (i.e., the ability to adopt the perspectives of others) components of empathy. For the sake of clarity, this framework separates the latter into its own response mode (see 1.3. (Take Perspective), below; for further disambiguation of “empathy” as a concept, see Klimecki (2019) and Batson & Ahmad (2009)). 

Empathizing is a useful strategy for exchanges 1) about emotionally-driven conflict, 2) where the goal is to improve the relationship between involved parties. It often applies to topics like vaccine hesitancy and medical mistrust (Gesser-Edelsburg et al. 2018; Gagneur 2020), partisan affective polarization (Saveski et al. 2021; Garrett et al. 2014; Wojcieszak & Warner 2020), hate speech (Hangartner et al. 2021), intergroup contact (Johnston & Glasford 2018), and attitudes towards immigration and racial or ethnic out-groups (Klimecki et al. 2020; Sirin et al. 2016), which often involve feelings of fear, anger, anxiety, or resentment. 

Empathy can be more productive than trying to push through fact or reason alone; in the case of online hate speech, Hangartner et al. (2021) find that empathy-based counterspeech messages increase the retrospective deletion of xenophobic posts and reduce the prospective creation of new posts, even after a four-week follow-up period. By comparison, treatments based on warning of consequences had no consistent effect. Similarly, Gesser-Edelsburg et al. (2018) and Gagneur (2020) show how engaging with emotions like fear and concern for loved ones while addressing misinformation is more effective than fact-based correction alone.

There are important limitations to note. Just as empathy can “motivate prosocial behavior,” it can also “motivate cruelty and aggression,” or “lead to burnout and exhaustion” by taking an emotional toll (Bloom 2017). Negative intergroup interactions may further “exacerbate… hostility by enhancing anxiety and reducing empathy” (Wojcieszak & Warner 2020, 789). For recommendations on addressing antisocial reactions to empathy, see 3.1 (De-Escalate), 3.2 (Invite Sociability), and 4.0 (Also Consider: Do Not Respond).

1.3. Take Perspective

Empathize and Take Perspective are related strategies, but they differ in terms of focus. As noted above, while empathizing involves sharing and understanding others’ emotions, perspective-taking helps identify their intentions, needs, reactions, and behaviors. It is the act of putting yourself in someone else’s shoes and trying to view the situation the way they do, even if you do not agree with them. 

This sense of perspective has a wealth of benefits for in-group/out-group dynamics and democratic societies. It reduces exclusionary attitudes (Kalla & Broockman 2021), negative stereotypes and biases (Galinsky & Moskowitz 2000), and intergroup anxiety (Aberson & Haag 2007), while increasing positive attitudes and stereotypes, depolarization, and the likelihood of positive future interactions (ibid; Marchal 2022). Adida et al. (2018) show that a short, interactive perspective-taking exercise – asking subjects to imagine themselves in the shoes of a refugee – increases support for refugees in the United States on a bipartisan level. These exercises can also help maintain a healthy deliberative democracy: Muradova (2021) argues that “seeing the other side” results in deeper, more reflective citizen deliberations. 

Importantly, this response mode only works if it is well-informed (Eyal et al. 2018). Simply imagining someone’s perspective may not be accurate. “Perspective-getting,” or perspective-taking combined with 1) good interpersonal communication and 2) listening to how the other person describes their situation, is needed to achieve these positive benefits (ibid). Longmire & Harrison (2018) further emphasize that whether this approach is appropriate depends on the goal and underlying dynamics of the exchange. Where empathy-based interventions have a “consistently stronger impact than perspective-taking on strengthening social bonds,” the latter “allows actors to attain resource-focused goals in strategic interactions” (ibid, 908). That understanding of social vs. material concerns can help avoid making the situation worse: in the case of Klimecki et al. (2020), perspective-taking exercises made opponents of immigration less willing to engage with their pro-immigration counterparts, because the exercises increased perceived resource competition (see Johnston & Glasford (2018), Wojcieszak & Warner (2020), and Igartua et al. (2019) for situational drivers of intergroup contact and realistic conflict theory).

2. Inform: Correct, Co-Verify, Encourage Healthy Inquiry

As a response group, informing focuses on evaluating the quality of information. It seeks to provide or explain complex, empirical, evidence-based, and trustworthy knowledge – on one’s own, or in partnership with others. This group encompasses the modes of correction, co-verification, and encouraging healthy inquiry.

2.1. Correct

To correct someone is to show or tell them what is wrong, and instead explain what is accurate. The goal of this response mode can vary: it might aim to equip the speaker with skills to identify misinformation, or provide facts about important issues like vaccination or climate change. It can also make sure that others listening in on the conversation have access to accurate information. 

This response mode can be especially difficult to navigate. People tend to “reject information corrections that contradict their attitudes [and] share content that is consistent with their own narratives,” without verifying its accuracy (Gesser-Edelsburg et al. 2018). Questions of who, when, where, how, and what are determining factors of what makes effective – or ineffective – correction. 

Existing research suggests that anyone can correct misinformation as long as they either: 

  1. Are perceived as a credible expert on the subject (i.e., healthcare providers or public health organizations for medical information) (Cook et al. 2017; Vraga & Bode 2017),  or 
  2. Cite sources that are perceived as credible. For example, Seo et al. (2022) determine that individuals and organizations alike can effectively correct health misinformation “as long as the correction contains a reliable source” (896). 

This literature supports a “when you see something, say something” approach to non-expert correction. 

Tully et al. (2020) find that social media users can effectively push back against and halt the spread of misinformation online, but they are unlikely to do so – preferring not to engage, and therefore allowing false information to continue to spread. Notably, the chances of user-initiated correction were increased by the presence of other corrections that were civil in tone (ibid). Like listening (1.1), one well-done correction can trigger a domino effect of accurate information. 

Corrections should account for why people believe false information in the first place (see 1.2 (Empathize) and 1.3 (Take Perspective), above). Where framing false claims as “myths” can come across as dismissive (and result in a “backfire effect”), Gesser-Edelsburg et al. (2018) point to corrections based on “integrative decision-making,” transparency, and empathy. Bautista et al. (2021) illustrate a careful multi-step approach used by healthcare professionals: false information is identified through internal and external authentication, then corrected through preparation (broken down into “reflect,” “reveal,” “relate,” and “respect”) and dissemination (public and private priming and rebuttal).

This distinction between public and private is tricky, especially online. As noted above (Tully et al. 2020), public interventions encourage others to correct misinformation, which can slow or even stop the resharing of false content. But they can also put the receiver on the defensive. Mosleh et al. (2021) show how publicly replying with fact-checked reports increases the toxicity of language used on social media. Such cases may require more one-on-one communication, like through direct messaging. 

The timing of corrections – before (also known as prebunking, inoculation, or pre-emptive correction (Cook et al. 2017; Zerback et al. 2020)), during (Lee et al. 2023), or after exposure to false information – is the subject of ongoing research. Pre-emptive and mid-exposure corrections are key to stopping the spread of false narratives, while some experimental research suggests that debunking has comparatively higher longevity in terms of remembering and retaining corrected information (Brashier et al. 2021). For social media platforms, pre- and mid-exposure corrections often take the form of fact-checking labels. For individual communicators, that may look more like amplifying accurate information, intervening when they see false narratives being shared, or ensuring access to reliable sources in their communities (see 2.2 (Co-Verify) and 2.3 (Encourage Healthy Inquiry), below). 

While we often think about corrections in terms of topics or objects (i.e., vaccines or climate change), corrections about people can be especially important for a healthy, peaceful democratic society. The concept of “meta-perceptions” refers to what you believe about how others view you.

In a divided political context, people tend to have negative and inaccurate partisan meta-perceptions – because they are more likely to believe that members of the opposing political party view them with hostility, dehumanize them, and support violence against them, they feel more justified holding the same attitudes in return (Lees & Cikara 2020; Mernyk et al. 2022; Landry et al. 2022). The resulting cycle facilitates harm and increases support for undemocratic practices. Correcting these meta-perceptions by informing people what out-groups actually think of them (i.e., with survey data) is essential to slowing or reversing this cycle (ibid). Druckman (2023) and Druckman et al. (2023) examine how these out-partisan corrections work for ordinary citizens and legislators in real-world settings, especially in the presence of competing information.

2.2. Co-Verify

Co-verification involves fact-checking and evaluating sources with the help of a relevant expert – or, as an actionable step, when a trusted community member offers to verify a piece of information with another person (Murthy 2021). It acts as the “phone a friend” of response modes; where correction is about sorting the accurate from the inaccurate, co-verification is the process of assessing accuracy, with the benefit of trust, credibility, and interpersonal dynamics.  This combination of fact-checking with trusted social connections increases the acceptance of corrected information (Walter et al. 2020), even across different political ideologies (van der Linden et al. 2018). 

In practice, co-verification builds onto existing “media, science, digital, data, and health literacy” programs (Murthy 2021). It might involve a demonstration of “click restraint” (the need to look beyond the first results suggested by a search engine (Panizza et al. 2022)) as well as “lateral reading” (or “leaving the original content to investigate other information sources” – for example, opening a new tab to double-check the claims on a social media post (Axelsson et al. 2021)). Co-verification has been proven to be an effective approach to in-school digital literacy curricula for middle school to college-aged students. Those who learn literacy tools from (or with) a teacher, professor, or their peers have an improved ability to assess credible sources and detect bias, and show greater nuance and critical judgement when interacting with media (Kohnen et al. 2020; McGrew 2020; Axelsson et al. 2021; Breakstone et al. 2021; Brodsky et al. 2022).

2.3. Encourage Healthy Inquiry 

This response mode is about inspiring and enabling critical thinking. To encourage healthy inquiry is to help others know how to question the information they are reading – for example, “What do other sources say?” or “What evidence does it present?” (National Academies of Sciences, Engineering, and Medicine 2017; Murthy 2021). “Healthy” is an important keyword here because the goal is not to encourage skepticism of all information. Rather, this mode focuses on critically evaluating new claims instead of immediately believing them.

Encouraging healthy inquiry can involve some of the same information literacy interventions as co-verification (i.e., lateral reading; see Wineburg & McGrew 2017, Brodsky et al. 2021), with an important distinction. Instead of assessing information with someone, this mode seeks to provide the toolkit, opportunity, and motivation to make an assessment on their own. A “healthy inquiry”-based approach to lateral reading would advise the audience to consider context clues (i.e., information about the organization or individual who posted it) in addition to direct in-text claims (Brodsky et al. 2022).

Current research suggests that such active, self-led engagement – finding and evaluating information on one’s own – can be more effective than having it provided to you (see 2.1 (Correct)). Banerjee & Greene (2006) show that students tasked with creating their own anti-smoking ads – which involves locating and assessing the information to include – exhibit greater behavioral and attitudinal changes than students who merely analyze existing ads.

Encouragement can come from organizations or individuals, and it can take place over the short- or long-term. Case-by-case “nudges” that remind users to verify the accuracy and credibility of online content have been shown to improve headline truth discernment and reduce the sharing of misinformation (Pennycook et al. 2020, Jahanbakhsh et al. 2021); these can include accuracy prompts (Pennycook & Rand 2022), and warning labels about the credibility rating (Aslett et al. 2022) or political affiliation of the source (see Nassetta & Gross (2020) for how these labels apply to content posted by state-controlled media on digital video platforms). Newsroom-style games can also encourage inquiry by helping users interrogate information sources and apply those skills going forward (Vicol 2020). Long-term intervention can include a variety of digital, informational, and media education training programs (Bergsma & Carney 2008; Jones-Jang et al. 2019; Seo et al. 2019). Jeong et al. (2012) note that multi-session training programs are more effective at guiding healthy inquiry over time as compared to single-session programs. These programs can also be tailored to the needs and concerns of different populations (see Seo et al. (2019) for digital literacy training programs in older Black communities, and Kenzig & Mumford (2022) for targeted reminders encouraging skepticism among the vaccine-hesitant).

This response mode is an especially broad part of the ARTT framework because it involves skills that can – and should – apply beyond any one topic. McGrew & Breakstone (2022) make the case that “given the current threat posed by toxic digital content… evaluat[ing] online sources cannot be relegated to a single subject area,” and that, for students, such evaluation must be directly embedded “across the curricula.” As part of their study, ninth grade biology and world geography teachers wove principles of civic online reasoning and digital literacy into existing lesson plans. Students demonstrated a significant increase in their ability to evaluate online content.

Encouraging healthy inquiry can also be a way to ease the mental burden on those charged with correcting misinformation. Over the course of the COVID-19 pandemic, local health departments (LHDs) in the midwestern United States often faced conflict, public rebuttals, and harassment from trolls when they attempted to directly intervene in false claims on Facebook (Ittefaq 2023). Less direct engagement – like giving people the tools to interrogate information through FAQs – reduced the opportunity for conflict while still promoting the spread of accurate information (ibid).

3. Connect: De-Escalate, Invite Sociability, Share

This response group involves tactfully joining a conversation with the goal of reducing tensions and strengthening (or restoring) a sense of human connection to difficult exchanges. It includes guided responses for de-escalation, inviting sociability, and sharing personal stories or narratives.

3.1. De-Escalate

De-escalation focuses on reducing hostilities between individuals or groups. It can be especially useful for getting a conversation back to a place where it can be productive, or for removing barriers so that conversation can begin.

Common de-escalation strategies include appeals to common values (also known as “moral suasion”) (Munger 2020), a superordinate identity (i.e., “Americans, not partisans”) (quoted in Levendusky 2018; see also Voelkel et al. 2023), or common interests and goals (Dorjee & Ting-Toomey 2020; Rajadesingan et al. 2021). De-escalation may also include humanizing a conflict through mutual respect (see 3.2 – Invite Sociability), listening (see 1.1 – Listen), empathizing (see 1.2 – Empathize), positive intergroup contact (Wojcieszak & Warner 2020; Voelkel et al. 2023), and/or the sharing of personal narratives (see 3.3 – Share). A detailed overview of dialogue-based de-escalation strategies from intergroup conflict research can be found in Dorjee & Ting-Toomey (2020). Key recommendations point to approaches like “middle-way,” “transcendent,” “identity-sensitive,” or “peace-making” dialogue, which involve principles of “mutually beneficial reconciliation,” an open-minded or judgement-free exchange of ideas, the rejection of a “binary win-lose stance,” and an awareness of sociocultural differences or power imbalances (ibid). 

If an interaction is likely to result in conflict, Yeomans et al. (2020) suggest a preventative approach: indicating one’s interest and engagement in hearing what the other person has to say at the beginning of a conversation (what the authors refer to as “early conversational receptiveness”) can help keep the rest of the conversation civil, and prevent escalation before it happens. This “early de-escalation” is especially useful for intractable conflicts, which lock people and communities into “destructive spirals of enmity” (Kugler & Coleman 2020). Intractability is often the result of clashes over “important moral differences” (ibid; see also Dorjee & Ting-Toomey 2020) like “[the allocation of] critical resources, identity, meaning, justice, [or] power” (Coleman 2003). These kinds of disputes go beyond an “agree to disagree” mindset; left to fester, they may lead to increased support for partisan violence and undemocratic practices (Voelkel et al. 2023; Druckman 2023).

In a similar vein, one’s choice of de-escalation strategy must be informed by why tensions escalated in the first place (Dorjee & Ting-Toomey 2020). Existing research shows that – while fact checks help counter misinformation – they fail to reduce affective polarization, which is driven more by emotion than by fact (Druckman 2023; Boukes & Hameleers 2023). But it also shows how adaptive de-escalation can fill that gap: Munger (2020) finds that messages appealing to Republican or Democratic values are more effective at reducing partisan incivility on Twitter than messages with no moral content. Similarly, Huddy & Yair (2021) demonstrate that warm or friendly interactions between party leaders ease affective polarization, whereas policy compromise does not.

Adaptive de-escalation is critical for tailoring interventions to online vs. offline spaces. In their analysis of cross-partisan interaction on Reddit, Rajadesingan et al. (2021) discuss how, in offline settings,  “knowing more about out-partisan interlocutors help[s] manage disagreements” by adding an element of context and empathy. The same can backfire in online spaces, where personal information (i.e., other subreddits a user participates in or past comments they have made) can raise “concerns around privacy and misuse of that information for personal attacks[,] especially among women and minority groups” (ibid). The Redditors in their study instead preferred “establishing common ground, complimenting, and remaining dispassionate in their interactions” as a way to de-escalate cross-partisan conversations (ibid; for further analysis of digital de-escalation, see Hangartner et al. (2021) on the use of empathy-based counterspeech to reduce racist hate speech on social media, and Munger (2020), discussed above). 

3.2. Invite Sociability 

Reminders of the ways we’re connected to one another can have a powerful impact on communication. Inviting sociability – or an emphasis on shared norms and values – acts as an opportunity to reflect on the bonds that tie us together, which increases trust, civility, and open-mindedness.

There are two broad conversational approaches to invite sociability: first, by appeals to “who we are” in the sense of a common, superordinate identity; and second, by appeals to “how we behave,” through reminders of prosocial behavioral norms and expectations. Some of these strategies overlap with the research examined in the above response mode (see 3.1. (De-Escalate)). 

People often use identity as a shortcut to know who to trust and how to communicate. This is especially true for polarized, fast-paced, or overloaded digital information environments, where “trust, not knowledge” may be required to overcome issues like COVID-19 vaccine hesitancy (Ledford et al. 2022; see Seo et al. (2022), Magee (2022), and Heiman et al. (2022) for further analysis of identity-based approaches to vaccine hesitancy in migrant minority communities). In an analysis of Latino and Latina communities on Facebook, Rivera et al. (2022) find that users’ personal relationship with the author of a post leads to greater engagement, information-seeking, discussion of content with others, and changes in health behaviors than simple exposure to evidence-based health communication.

But identity is rarely fixed, and existing research shows the value in purposefully shifting who we perceive as being “like us.” Greenaway et al. (2015) find that people are more likely to follow instructions and have better communication with perceived in-group members than with out-group members. That difference disappears when participants are made aware of a common identity – when the boundaries of group identity are redrawn to include all participants. Further research shows that this framing can more successfully improve intergroup relations than contact and interaction (Martinez-Ebers et al. 2021). This sense of “we” can even bridge deeply entrenched, partisan divides; drawing on large-scale survey and experimental data, Levendusky (2023) examines how priming commonalities like shared identities outside of politics, cross-party friendships, and common issue positions lessens animosity between Republicans and Democrats.

Prosocial “nudges,” or reminders of behavioral norms and socially-defined best practices, are another important form of inviting sociability (Goldstein et al. 2008; Andi & Akesson 2021; Pennycook et al. 2020). In an experiment with hotel guests, Goldstein et al. (2008) find that people are more likely to change their behavior when given reminders that “the majority of [hotel] guests reuse their towels” compared to “common good”-based reminders about environmental protection or conserving water. These cues are most effective when the norms being described closely match the audience’s environment (in the case of Goldstein et al. 2008, “the majority of guests reuse their towels” is most likely to change behavior in a hotel, not necessarily in someone’s home), or when they are given by a perceived in-group member (Gómez et al. 2013; Munger 2017). Munger (2017) applies this approach to the context of racist online harassment: white male Twitter users lessened their use of anti-Black slurs when sanctioned by an account that appeared to be a high-follower white male.

3.3. Share

To share is to bring something personal into a discussion – specifically, sharing one’s own story can be an important way that people explain their thought process or personal experience of navigating a difficult decision.

The strength of this response mode is its ability to humanize, which is an important counterbalance to the “faceless” anonymizing effect of social media. Narratives that are based on personal experience, and especially those that involve a sense of vulnerability, can bridge ideological divides, encourage empathy, and make people appear more trustworthy (Hagmann et al. 2020; Bojer et al. 2006). In data-driven information environments, these stories are seen as more authentic, and therefore more durably persuasive, than non-narrative messages (ibid; Fiske & Dupree 2014; Oschatz & Marker 2020). Because stories inherently lead to some degree of identity negotiation and perspective-taking (see 1.3, Take Perspective), Black (2008a, 2008b) argues that storytelling can aid deliberative democracy. 

Recent research points to stories as a way of overcoming partisan affective polarization (Voelkel et al. 2023), delivering health or science information (Haigh & Hardy 2011; Massey et al. 2020; National Academies of Sciences, Engineering, and Medicine 2017; Le et al. 2023), and reducing bias between racial or cultural outgroups (Gaertner & Dovidio 2014; Wojcieszak & Kim 2016). While evidence suggests at least some impact from both first- and third-person narratives, Kim & Lim (2022) find that first-person storytelling is comparatively more persuasive because of the way it fosters a sense of “direct interaction with a narrative character.” This dynamic is supported by Igartua et al. (2019), who find that first-person narratives featuring stigmatized immigrants can work as a sort of “imagined contact,” improving outgroup attitudes and leading to actual intergroup contact among prejudiced individuals. Again, such “imagined contact” is uniquely important for digital spaces, where it can be all too easy to forget that real people are on the receiving end of unkind or hateful messages. 

As noted above, storytelling tends to be more persuasive than statistics alone, especially for emotionally driven issue areas (Wojcieszak & Kim 2016; Gaertner & Dovidio 2014). Yet the choice may be less “either/or” than “both/and”: Shelby & Ernst (2013) explore how personal narratives can effectively supplement statistics to deliver vaccine-related health information. 

4. Also consider: Do Not Respond

As a final note, it’s important to remember that sometimes the best response might be none at all. Blocking and reporting a user is appropriate when there is a clear violation of platform rules, or when continuing the conversation poses a danger to you or others. Refusing to engage is also appropriate when a user or message is operating in bad faith, generated by bots, or harmful to your mental health – research indicates that responding to trolls or hostile discussions can exacerbate negative interactions and lead to psychological distress (Buckels et al. 2014; Adams et al. 2006). 

While this response mode may feel unsatisfying (or not enough of a “real” response), it can have important benefits. 

First, refusing to respond can still lead to a net decrease in online incivility, in the amplification of harmful content, and in the spread of false information. For social media trolls driven by antisocial personality traits or a desire for negative social rewards, attention is the point; any response might only encourage more trolling (Buckels et al. 2014; Craker & March 2016). Constant engagement can also lead to “social media fatigue,” a phenomenon linked to increased misinformation-sharing (Islam et al. 2020).

Second, “deliberate ignorance” may not be bliss (Hertwig & Engel 2016), but it is at least a useful strategy that allows individuals to maintain their well-being. This can be critical for social workers, care professionals, or community leaders, who are at high risk of “compassion fatigue” (Adams et al. 2006). In a broader sense, “social media fatigue” can result in discontinued use of social media (Ou et al. 2023) – and, as a result, fewer people to correct, co-verify, or enact any of the above response modes to promote a healthy and accurate digital information ecosystem. 

Conclusion 

In 2021, the Office of the U.S. Surgeon General urged that “together, we have the power to build a healthier information environment…. But there is much to be done, and each of us has a role to play” (Murthy 2021, 6).

That guidance has only grown more relevant. As major tech companies scale back their content moderation and fact-checking policies for the U.S. market, the responsibility for ensuring that accurate information spreads falls increasingly to people, not platforms. Communities – and especially educators, journalists, public health officials, mutual aid organizers, and civil society leaders – will write the next chapter of what our digital spaces look like and how we communicate within them. 

Further, while social media is rightly blamed for many of the fractured, toxic elements of our social and political landscape, it also provides an opportunity for repair. The ARTT framework is designed to help public communicators seize that opportunity. Each of its guided responses is grounded in an empathetic understanding of where people are coming from and evidence-based guidance on how best to engage. There is no easy reset button for the problems of the current digital environment. But – with the right tools – healthier, better-informed online communities are possible, one conversation at a time. 

Acknowledgements 

The Analysis and Response Toolkit for Trust (ARTT) is part of Discourse Labs, a project of Hacks/Hackers and the Paul G. Allen School of Computer Science & Engineering at the University of Washington. Connie Moon Sehat (Hacks/Hackers Researcher at Large) is the Principal Investigator (PI) for the ARTT project. Amy X. Zhang (Assistant Professor at University of Washington’s Allen School) and Franziska Roesner (Associate Professor, Allen School) serve as co-Principal Investigators (co-PIs). 

More about the ARTT project can be found on its website at artt.cs.washington.edu.

This review was written by Kimberly Tower (MediaWell), based on a prior review by Kristin M. Miller (ARTT Research Consultant) that drew on her work with the ARTT Catalog and on literature suggestions from external scholars. The prior review was edited by Connie Moon Sehat, Molly Laas (Program Director, Social Science Research Council), and Endalkachew Chala (ARTT Researcher). 

This work was supported by the National Science Foundation’s Convergence Accelerator program under Award No. 49100421C0037.

Bibliography

Aberson, Christopher L., and Sarah C. Haag. 2007. “Contact, Perspective Taking, and Anxiety as Predictors of Stereotype Endorsement, Explicit Attitudes, and Implicit Attitudes.” Group Processes & Intergroup Relations 10 (2): 179–201. https://doi.org/10.1177/1368430207074726.

Adams, Richard E., Joseph A. Boscarino, and Charles R. Figley. 2006. “Compassion Fatigue and Psychological Distress among Social Workers: A Validation Study.” American Journal of Orthopsychiatry 76 (1): 103–8. https://doi.org/10.1037/0002-9432.76.1.103.

Adida, Claire L., Adeline Lo, and Melina R. Platas. 2018. “Perspective Taking Can Promote Short-Term Inclusionary Behavior toward Syrian Refugees.” Proceedings of the National Academy of Sciences 115 (38): 9521–26. https://doi.org/10.1073/pnas.1804002115.

Altay, Sacha, Anne-Sophie Hacquin, and Hugo Mercier. 2022. “Why Do So Few People Share Fake News? It Hurts Their Reputation.” New Media & Society 24 (6): 1303–24. https://doi.org/10.1177/1461444820969893.

Amazeen, Michelle A., and Erik P. Bucy. 2019. “Conferring Resistance to Digital Disinformation: The Inoculating Influence of Procedural News Knowledge.” Journal of Broadcasting & Electronic Media 63 (3): 415–32. https://doi.org/10.1080/08838151.2019.1653101.

Andı, Simge, and Jesper Akesson. 2020. “Nudging Away False News: Evidence from a Social Norms Experiment.” Digital Journalism 9 (1): 106–25. https://doi.org/10.1080/21670811.2020.1847674.

Arendt, Florian, Temple Northup, Michaela Forrai, and Dietram Scheufele. 2023. “Why We Stopped Listening to the Other Side: How Partisan Cues in News Coverage Undermine the Deliberative Foundations of Democracy.” Journal of Communication 73 (5): 413–26. https://doi.org/10.1093/joc/jqad007.

Aslett, Kevin, Andrew M. Guess, Richard Bonneau, Jonathan Nagler, and Joshua A. Tucker. 2022. “News Credibility Labels Have Limited Average Effects on News Diet Quality and Fail to Reduce Misperceptions.” Science Advances 8 (18): eabl3844. https://doi.org/10.1126/sciadv.abl3844.

Axelsson, Carl-Anton Werner, Mona Guath, and Thomas Nygren. 2021. “Learning How to Separate Fake from Real News: Scalable Digital Tutorials Promoting Students’ Civic Online Reasoning.” Future Internet 13 (3): 60. https://doi.org/10.3390/fi13030060.

Badrinathan, Sumitra, and Simon Chauchard. 2024. “‘I Don’t Think That’s True, Bro!’ Social Corrections of Misinformation in India.” The International Journal of Press/Politics 29 (2): 394–416. https://doi.org/10.1177/19401612231158770.

Balietti, Stefano, Lise Getoor, Daniel G. Goldstein, and Duncan J. Watts. 2021. “Reducing Opinion Polarization: Effects of Exposure to Similar People with Differing Political Views.” Proceedings of the National Academy of Sciences 118 (52): e2112552118. https://doi.org/10.1073/pnas.2112552118.

Banerjee, Smita C., and Kathryn Greene. 2006. “Analysis Versus Production: Adolescent Cognitive and Attitudinal Responses to Antismoking Interventions.” Journal of Communication 56 (4): 773–94. https://doi.org/10.1111/j.1460-2466.2006.00319.x.

Batson, C. Daniel, and Nadia Y. Ahmad. 2009. “Using Empathy to Improve Intergroup Attitudes and Relations.” Social Issues and Policy Review 3 (1): 141–77. https://doi.org/10.1111/j.1751-2409.2009.01013.x.

Bautista, John Robert, Yan Zhang, and Jacek Gwizdka. 2021. “Healthcare Professionals’ Acts of Correcting Health Misinformation on Social Media.” International Journal of Medical Informatics 148 (April): 104375. https://doi.org/10.1016/j.ijmedinf.2021.104375.

Bavel, Jay J. Van, Katherine Baicker, Paulo S. Boggio, Valerio Capraro, Aleksandra Cichocka, Mina Cikara, Molly J. Crockett, et al. 2020. “Using Social and Behavioural Science to Support COVID-19 Pandemic Response.” Nature Human Behaviour 4 (5): 460–71. https://doi.org/10.1038/s41562-020-0884-z.

Bechler, Christopher J., and Zakary L. Tormala. 2021. “Misdirecting Persuasive Efforts during the COVID-19 Pandemic: The Targets People Choose May Not Be the Most Likely to Change.” Journal of the Association for Consumer Research 6 (1): 187–95. https://doi.org/10.1086/711732.

Bergsma, Lynda J., and Mary E. Carney. 2008. “Effectiveness of Health-Promoting Media Literacy Education: A Systematic Review.” Health Education Research 23 (3): 522–42. https://doi.org/10.1093/her/cym084.

Bilewicz, Michal, Marta Witkowska, Silviana Stubig, Marta Beneda, and Roland Imhoff. 2017. “How to Teach about the Holocaust? Psychological Obstacles in Historical Education in Poland and Germany.” In History Education and Conflict Transformation: Social Psychological Theories, History Teaching and Reconciliation, edited by Charis Psaltis, Mario Carretero, and Sabina Čehajić-Clancy, 169–97. Cham: Springer International Publishing. https://doi.org/10.1007/978-3-319-54681-0_7.

Bizzotto, Nicole, Susanna Morlino, and Peter Johannes Schulz. 2022. “Misinformation in Italian Online Mental Health Communities During the COVID-19 Pandemic: Protocol for a Content Analysis Study.” JMIR Research Protocols 11 (5): e35347. https://doi.org/10.2196/35347.

Black, Laura W. 2008a. “Deliberation, Storytelling, and Dialogic Moments.” Communication Theory 18 (1): 93–116. https://doi.org/10.1111/j.1468-2885.2007.00315.x.

———. 2008b. “Listening to the City: Difference, Identity, and Storytelling in Online Deliberative Groups.” Journal of Deliberative Democracy 5 (1). https://doi.org/10.16997/jdd.76.

Bles, Anne Marthe van der, Sander van der Linden, Alexandra L. J. Freeman, and David J. Spiegelhalter. 2020. “The Effects of Communicating Uncertainty on Public Trust in Facts and Numbers.” Proceedings of the National Academy of Sciences 117 (14): 7672–83. https://doi.org/10.1073/pnas.1913678117.

Bloom, Paul. 2017. “Empathy and Its Discontents.” Trends in Cognitive Sciences 21 (1): 24–31. https://doi.org/10.1016/j.tics.2016.11.004.

Bode, Leticia, and Emily K. Vraga. 2021. “Correction Experiences on Social Media During COVID-19.” Social Media + Society 7 (2): 20563051211008829. https://doi.org/10.1177/20563051211008829.

Bojer, Marianne “Mille,” Heiko Roehl, Marianne Knuth, and Colleen Magner. 2012. Mapping Dialogue: Essential Tools for Social Change. Taos Institute Publications.

Borah, Porismita, and Xizhu Xiao. 2018. “The Importance of ‘Likes’: The Interplay of Message Framing, Source, and Social Endorsement on Credibility Perceptions of Health Information on Facebook.” Journal of Health Communication 23 (4): 399–411. https://doi.org/10.1080/10810730.2018.1455770.

Boukes, Mark, and Michael Hameleers. 2023. “Fighting Lies with Facts or Humor: Comparing the Effectiveness of Satirical and Regular Fact-Checks in Response to Misinformation and Disinformation.” Communication Monographs 90 (1): 69–91. https://doi.org/10.1080/03637751.2022.2097284.

Brashier, Nadia M., Gordon Pennycook, Adam J. Berinsky, and David G. Rand. 2021. “Timing Matters When Correcting Fake News.” Proceedings of the National Academy of Sciences 118 (5): e2020043118. https://doi.org/10.1073/pnas.2020043118.

Breakstone, Joel, Mark Smith, Priscilla Connors, Teresa Ortega, Darby Kerr, and Sam Wineburg. 2021. “Lateral Reading: College Students Learn to Critically Evaluate Internet Sources in an Online Course.” Harvard Kennedy School Misinformation Review, February. https://doi.org/10.37016/mr-2020-56.

Brodsky, Jessica E., Patricia J. Brooks, Donna Scimeca, Ralitsa Todorova, Peter Galati, Michael Batson, Robert Grosso, Michael Matthews, Victor Miller, and Michael Caulfield. 2021. “Improving College Students’ Fact-Checking Strategies through Lateral Reading Instruction in a General Education Civics Course.” Cognitive Research: Principles and Implications 6 (1): 23. https://doi.org/10.1186/s41235-021-00291-4.

Brodsky, Jessica E., Arshia K. Lodhi, Catherine M. Messina, and Patricia J. Brooks. 2022. “Fact-Checking Instruction Strengthens the Association between Attitudes and Use of Lateral Reading Strategies in College Students.” Proceedings of the Annual Meeting of the Cognitive Science Society 44 (44). https://escholarship.org/uc/item/3882t34s.

Buckels, Erin E., Paul D. Trapnell, and Delroy L. Paulhus. 2014. “Trolls Just Want to Have Fun.” Personality and Individual Differences 67 (September): 97–102. https://doi.org/10.1016/j.paid.2014.01.016.

Cai, Deborah A., and Colleen Tolan. 2020. “Public Shaming and Attacks on Social Media: The Case of White Evangelical Christians.” Negotiation and Conflict Management Research 13 (3): 231–43. https://doi.org/10.1111/ncmr.12188.

Chan, Man-pui Sally, and Dolores Albarracín. 2023. “A Meta-Analysis of Correction Effects in Science-Relevant Misinformation.” Nature Human Behaviour 7 (9): 1514–25. https://doi.org/10.1038/s41562-023-01623-8.

Chan, Man-Pui Sally, Christopher R. Jones, Kathleen Hall Jamieson, and Dolores Albarracín. 2017. “Debunking: A Meta-Analysis of the Psychological Efficacy of Messages Countering Misinformation.” Psychological Science 28 (11): 1531–46. https://doi.org/10.1177/0956797617714579.

Charquero-Ballester, Marina, Jessica G Walter, Ida A Nissen, and Anja Bechmann. 2021. “Different Types of COVID-19 Misinformation Have Different Emotional Valence on Twitter.” Big Data & Society 8 (2): 20539517211041279. https://doi.org/10.1177/20539517211041279.

Coleman, Peter T. 2003. “Characteristics of Protracted, Intractable Conflict: Toward the Development of a Metaframework-I.” Peace and Conflict: Journal of Peace Psychology 9 (1): 1–37. https://doi.org/10.1207/S15327949PAC0901_01.

Collins, Hanne K. 2022. “When Listening Is Spoken.” Current Opinion in Psychology 47 (October): 101402. https://doi.org/10.1016/j.copsyc.2022.101402.

Cook, John, Stephan Lewandowsky, and Ullrich K. H. Ecker. 2017. “Neutralizing Misinformation through Inoculation: Exposing Misleading Argumentation Techniques Reduces Their Influence.” PLOS ONE 12 (5). https://doi.org/10.1371/journal.pone.0175799.

Cottone, Dina M., and Paul C. McCabe. 2022. “Increases in Preventable Diseases Due to Antivaccination Beliefs: Implications for Schools.” School Psychology 37 (4): 319–29. https://doi.org/10.1037/spq0000504.

Courchesne, Laura, Julia Ilhardt, and Jacob N. Shapiro. 2021. “Review of Social Science Research on the Impact of Countermeasures against Influence Operations.” Harvard Kennedy School Misinformation Review, September. https://doi.org/10.37016/mr-2020-79.

Craker, Naomi, and Evita March. 2016. “The Dark Side of Facebook®: The Dark Tetrad, Negative Social Potency, and Trolling Behaviours.” Personality and Individual Differences 102 (November): 79–84. https://doi.org/10.1016/j.paid.2016.06.043.

“Cyberbullying: What Is It and How to Stop It.” 2024. UNICEF. https://www.unicef.org/end-violence/how-to-stop-cyberbullying.

Davis, Mark H., Laura Conklin, Amy Smith, and Carol Luce. 1996. “Effect of Perspective Taking on the Cognitive Representation of Persons: A Merging of Self and Other.” Journal of Personality and Social Psychology 70 (4): 713–26. https://doi.org/10.1037/0022-3514.70.4.713.

Dor, Gal, Ryan Shandler, Miguel Alberto Gomez, and Daphna Canetti. 2024. “Fear over Facts: How Preconceptions Explain Perceptions of Threat Following Cyberattacks.” Journal of Information Technology & Politics 0 (0): 1–16. https://doi.org/10.1080/19331681.2024.2420669.

Dorjee, Tenzin, and Stella Ting-Toomey. 2020. “Understanding Intergroup Conflict Complexity: An Application of the Socioecological Framework and the Integrative Identity Negotiation Theory.” Negotiation and Conflict Management Research 13 (3): 244–62. https://doi.org/10.1111/ncmr.12190.

Drew, Emily M. 2012. “‘Listening through White Ears’: Cross-Racial Dialogues as a Strategy to Address the Racial Effects of Gentrification.” Journal of Urban Affairs, February. https://www.tandfonline.com/doi/abs/10.1111/j.1467-9906.2011.00572.x.

Druckman, James N. 2023. “Correcting Misperceptions of the Other Political Party Does Not Robustly Reduce Support for Undemocratic Practices or Partisan Violence.” Proceedings of the National Academy of Sciences 120 (37): e2308938120. https://doi.org/10.1073/pnas.2308938120.

Druckman, James N., Suji Kang, James Chu, Michael N. Stagnaro, Jan G. Voelkel, Joseph S. Mernyk, Sophia L. Pink, Chrystal Redekopp, David G. Rand, and Robb Willer. 2023. “Correcting Misperceptions of Out-Partisans Decreases American Legislators’ Support for Undemocratic Practices.” Proceedings of the National Academy of Sciences 120 (23): e2301836120. https://doi.org/10.1073/pnas.2301836120.

Ecker, Ullrich K. H., Stephan Lewandowsky, John Cook, Philipp Schmid, Lisa K. Fazio, Nadia Brashier, Panayiota Kendeou, Emily K. Vraga, and Michelle A. Amazeen. 2022. “The Psychological Drivers of Misinformation Belief and Its Resistance to Correction.” Nature Reviews Psychology 1 (1): 13–29. https://doi.org/10.1038/s44159-021-00006-y.

Epstein, Ziv, Adam J. Berinsky, Rocky Cole, Andrew Gully, Gordon Pennycook, and David G. Rand. 2021. “Developing an Accuracy-Prompt Toolkit to Reduce COVID-19 Misinformation Online.” Harvard Kennedy School Misinformation Review, May. https://doi.org/10.37016/mr-2020-71.

Evans, Anthony M., and Joachim I. Krueger. 2011. “Elements of Trust: Risk and Perspective-Taking.” Journal of Experimental Social Psychology 47 (1): 171–77. https://doi.org/10.1016/j.jesp.2010.08.007.

Eveland Jr., William P., Kathryn D. Coduto, Osei Appiah, and Olivia M. Bullock. 2020. “Listening During Political Conversations: Traits and Situations.” Political Communication 37 (5): 656–77. https://doi.org/10.1080/10584609.2020.1736701.

Eyal, Tal, Mary Steffel, and Nicholas Epley. 2018. “Perspective Mistaking: Accurately Understanding the Mind of Another Requires Getting Perspective, Not Taking Perspective.” Journal of Personality and Social Psychology 114 (4): 547–71. https://doi.org/10.1037/pspa0000115.

Fazio, Lisa. 2020. “Pausing to Consider Why a Headline Is True or False Can Help Reduce the Sharing of False News.” Harvard Kennedy School Misinformation Review 1 (2). https://doi.org/10.37016/mr-2020-009.

Feinberg, Matthew, and Robb Willer. 2015. “From Gulf to Bridge: When Do Moral Arguments Facilitate Political Influence?” Personality and Social Psychology Bulletin 41 (12): 1665–81. https://doi.org/10.1177/0146167215607842.

Fiske, Susan T., and Cydney Dupree. 2014. “Gaining Trust as Well as Respect in Communicating to Motivated Audiences about Science Topics.” Proceedings of the National Academy of Sciences 111 (supplement_4): 13593–97. https://doi.org/10.1073/pnas.1317505111.

Flores, Alexandra, Jennifer C. Cole, Stephan Dickert, Kimin Eom, Gabriela M. Jiga-Boy, Tehila Kogut, Riley Loria, et al. 2022. “Politicians Polarize and Experts Depolarize Public Support for COVID-19 Management Policies across Countries.” Proceedings of the National Academy of Sciences 119 (3): e2117543119. https://doi.org/10.1073/pnas.2117543119.

Gaertner, Samuel L., and John F. Dovidio. 2014. Reducing Intergroup Bias: The Common Ingroup Identity Model. New York: Psychology Press. https://doi.org/10.4324/9781315804576.

Gagneur, Arnaud. 2020. “Motivational Interviewing: A Powerful Tool to Address Vaccine Hesitancy.” Canada Communicable Disease Report 46 (04): 93–97. https://doi.org/10.14745/ccdr.v46i04a06.

Galinsky, Adam D., and Gordon B. Moskowitz. 2000. “Perspective-Taking: Decreasing Stereotype Expression, Stereotype Accessibility, and in-Group Favoritism.” Journal of Personality and Social Psychology 78 (4): 708–24. https://doi.org/10.1037/0022-3514.78.4.708.

Garrett, R. Kelly, Shira Dvir Gvirsman, Benjamin K. Johnson, Yariv Tsfati, Rachel Neo, and Aysenur Dal. 2014. “Implications of Pro- and Counterattitudinal Information Exposure for Affective Polarization: Partisan Media Exposure and Affective Polarization.” Human Communication Research 40 (3): 309–32. https://doi.org/10.1111/hcre.12028.

Gesser-Edelsburg, Anat, Alon Diamant, Rana Hijazi, and Gustavo S. Mesch. 2018. “Correcting Misinformation by Health Organizations during Measles Outbreaks: A Controlled Experiment.” PLOS ONE 13 (12): e0209505. https://doi.org/10.1371/journal.pone.0209505.

Gisondi, Michael A., Daniel Chambers, Tatum Minh La, Alexa Ryan, Adyant Shankar, Athena Xue, and Rachel Anne Barber. 2022. “A Stanford Conference on Social Media, Ethics, and COVID-19 Misinformation (INFODEMIC): Qualitative Thematic Analysis.” Journal of Medical Internet Research 24 (2): e35707. https://doi.org/10.2196/35707.

Goldstein, Noah J., Robert B. Cialdini, and Vladas Griskevicius. 2008. “A Room with a Viewpoint: Using Social Norms to Motivate Environmental Conservation in Hotels.” Journal of Consumer Research 35 (3): 472–82. https://doi.org/10.1086/586910.

Gómez, Ángel, John F. Dovidio, Samuel L. Gaertner, Saulo Fernández, and Alexandra Vázquez. 2013. “Responses to Endorsement of Commonality by Ingroup and Outgroup Members: The Roles of Group Representation and Threat.” Personality and Social Psychology Bulletin 39 (4): 419–31. https://doi.org/10.1177/0146167213475366.

Graupensperger, Scott, Devon A. Abdallah, and Christine M. Lee. 2021. “Social Norms and Vaccine Uptake: College Students’ COVID Vaccination Intentions, Attitudes, and Estimated Peer Norms and Comparisons with Influenza Vaccine.” Vaccine 39 (15): 2060–67. https://doi.org/10.1016/j.vaccine.2021.03.018.

Greenaway, Katharine H., Ruth G. Wright, Joanne Willingham, Katherine J. Reynolds, and S. Alexander Haslam. 2015. “Shared Identity Is Key to Effective Communication.” Personality and Social Psychology Bulletin 41 (2): 171–82. https://doi.org/10.1177/0146167214559709.

Hagmann, David, Julia Minson, and Catherine Tinsley. 2020. “Personal Narratives Build Trust Across Ideological Divides.” OSF. https://doi.org/10.31219/osf.io/sw7nz.

Haigh, Carol, and Pip Hardy. 2011. “Tell Me a Story — a Conceptual Exploration of Storytelling in Healthcare Education.” Nurse Education Today 31 (4): 408–11. https://doi.org/10.1016/j.nedt.2010.08.001.

Hangartner, Dominik, Gloria Gennaro, Sary Alasiri, Nicholas Bahrich, Alexandra Bornhoft, Joseph Boucher, Buket Buse Demirci, et al. 2021. “Empathy-Based Counterspeech Can Reduce Racist Hate Speech in a Social Media Field Experiment.” Proceedings of the National Academy of Sciences 118 (50): e2116310118. https://doi.org/10.1073/pnas.2116310118.

Hartman, Rachel, Will Blakey, Jake Womick, Chris Bail, Eli J. Finkel, Hahrie Han, John Sarrouf, et al. 2022. “Interventions to Reduce Partisan Animosity.” Nature Human Behaviour 6 (9): 1194–1205. https://doi.org/10.1038/s41562-022-01442-3.

Heiman, Samantha L., Edward R. Hirt, Calvin Isch, Jessica F. Brinkworth, Lee Cronk, Joe Alcock, Athena Aktipis, and Peter M. Todd. 2023. “Identities as Predictors of Vaccine Hesitancy during the COVID-19 Pandemic.” Journal of Social Issues 79 (2): 556–77. https://doi.org/10.1111/josi.12569.

Hendriks, Carolyn M., Selen A. Ercan, and Sonya Duus. 2019. “Listening in Polarised Controversies: A Study of Listening Practices in the Public Sphere.” Policy Sciences 52 (1): 137–51. https://doi.org/10.1007/s11077-018-9343-3.

Hertwig, Ralph, and Christoph Engel. 2016. “Homo Ignorans: Deliberately Choosing Not to Know.” Perspectives on Psychological Science 11 (3): 359–72. https://doi.org/10.1177/1745691616635594.

Huddy, Leonie, and Omer Yair. 2021. “Reducing Affective Polarization: Warm Group Relations or Policy Compromise?” Political Psychology 42 (2): 291–309. https://doi.org/10.1111/pops.12699.

Igartua, Juan-José, Magdalena Wojcieszak, and Nuri Kim. 2019. “How the Interplay of Imagined Contact and First-Person Narratives Improves Attitudes toward Stigmatized Immigrants: A Conditional Process Model.” European Journal of Social Psychology 49 (2): 385–97. https://doi.org/10.1002/ejsp.2509.

Inoue, Mami, Kanako Shimoura, Momoko Nagai-Tanima, and Tomoki Aoyama. 2022. “The Relationship Between Information Sources, Health Literacy, and COVID-19 Knowledge in the COVID-19 Infodemic: Cross-Sectional Online Study in Japan.” Journal of Medical Internet Research 24 (7): e38332. https://doi.org/10.2196/38332.

Islam, A. K. M. Najmul, Samuli Laato, Shamim Talukder, and Erkki Sutinen. 2020. “Misinformation Sharing and Social Media Fatigue during COVID-19: An Affordance and Cognitive Load Perspective.” Technological Forecasting and Social Change 159: 120201. https://doi.org/10.1016/j.techfore.2020.120201.

Ittefaq, Muhammad. 2023. “‘It Frustrates Me Beyond Words That I Can’t Fix That’: Health Misinformation Correction on Facebook During COVID-19.” Health Communication 39 (12): 2647–57. https://doi.org/10.1080/10410236.2023.2282279.

Itzchakov, Guy. 2020. “Can Listening Training Empower Service Employees? The Mediating Roles of Anxiety and Perspective-Taking.” European Journal of Work and Organizational Psychology 29 (6): 938–52. https://doi.org/10.1080/1359432X.2020.1776701.

Itzchakov, Guy, Avraham N. Kluger, and Dotan R. Castro. 2017. “I Am Aware of My Inconsistencies but Can Tolerate Them: The Effect of High Quality Listening on Speakers’ Attitude Ambivalence.” Personality and Social Psychology Bulletin 43 (1): 105–20. https://doi.org/10.1177/0146167216675339.

Itzchakov, Guy, Netta Weinstein, Mark Leary, Dvori Saluk, and Moty Amar. 2024. “Listening to Understand: The Role of High-Quality Listening on Speakers’ Attitude Depolarization during Disagreements.” Journal of Personality and Social Psychology 126 (2): 213–39. https://doi.org/10.1037/pspa0000366.

Itzchakov, Guy, Netta Weinstein, Nicole Legate, and Moty Amar. 2020. “Can High Quality Listening Predict Lower Speakers’ Prejudiced Attitudes?” Journal of Experimental Social Psychology 91 (November): 104022. https://doi.org/10.1016/j.jesp.2020.104022.

Jahanbakhsh, Farnaz, Amy X. Zhang, Adam J. Berinsky, Gordon Pennycook, David G. Rand, and David R. Karger. 2021. “Exploring Lightweight Interventions at Posting Time to Reduce the Sharing of Misinformation on Social Media.” Proc. ACM Hum.-Comput. Interact. 5 (CSCW1): 18:1-18:42. https://doi.org/10.1145/3449092.

Jeong, Se-Hoon, Hyunyi Cho, and Yoori Hwang. 2012. “Media Literacy Interventions: A Meta-Analytic Review.” Journal of Communication 62 (3): 454–72. https://doi.org/10.1111/j.1460-2466.2012.01643.x.

Johnston, Brian M., and Demis E. Glasford. 2018. “Intergroup Contact and Helping: How Quality Contact and Empathy Shape Outgroup Helping.” Group Processes & Intergroup Relations 21 (8): 1185–1201. https://doi.org/10.1177/1368430217711770.

Jones-Jang, S. Mo, Tara Mortensen, and Jingjing Liu. 2021. “Does Media Literacy Help Identification of Fake News? Information Literacy Helps, but Other Literacies Don’t.” American Behavioral Scientist 65 (2): 371–88. https://doi.org/10.1177/0002764219869406.

Kalla, Joshua L., and David E. Broockman. 2023. “Which Narrative Strategies Durably Reduce Prejudice? Evidence from Field and Survey Experiments Supporting the Efficacy of Perspective-Getting.” American Journal of Political Science 67 (1): 185–204. https://doi.org/10.1111/ajps.12657.

Kenzig, Melissa J., and Nadine S. Mumford. 2022. “Theoretical Considerations for Communication Campaigns to Address Vaccine Hesitancy.” Health Promotion Practice 23 (1): 46–50. https://doi.org/10.1177/15248399211050415.

Kim, Nuri, and Cuimin Lim. 2022. “Meeting of Minds: Narratives as a Tool to Reduce Prejudice toward Stigmatized Group Members.” Group Processes & Intergroup Relations 25 (6): 1478–95. https://doi.org/10.1177/13684302211012783.

Klimecki, Olga M. 2019. “The Role of Empathy and Compassion in Conflict Resolution.” Emotion Review 11 (4): 310–25. https://doi.org/10.1177/1754073919838609.

Klimecki, Olga M., Matthieu Vétois, and David Sander. 2020. “The Impact of Empathy and Perspective-Taking Instructions on Proponents and Opponents of Immigration.” Humanities and Social Sciences Communications 7 (1): 1–12. https://doi.org/10.1057/s41599-020-00581-0.

Kluger, Avraham N., Thomas E. Malloy, Sarit Pery, Guy Itzchakov, Dotan R. Castro, Liora Lipetz, Yaron Sela, et al. 2021. “Dyadic Listening in Teams: Social Relations Model.” Applied Psychology 70 (3): 1045–99. https://doi.org/10.1111/apps.12263.

Kohnen, Angela, Gillian Mertens, and Shelby Boehm. 2020. “Can Middle Schoolers Learn to Read the Web like Experts? Possibilities and Limits of a Strategy-Based Intervention.” Journal of Media Literacy Education 12 (2): 64–79. https://doi.org/10.23860/JMLE-2020-12-2-6.

König, Laura M. 2023. “Debunking Nutrition Myths: An Experimental Test of the ‘Truth Sandwich’ Text Format.” British Journal of Health Psychology 28 (4): 1000–1010. https://doi.org/10.1111/bjhp.12665.

Kozyreva, Anastasia, Stephan Lewandowsky, and Ralph Hertwig. 2020. “Citizens Versus the Internet: Confronting Digital Challenges With Cognitive Tools.” Psychological Science in the Public Interest 21 (3): 103–56. https://doi.org/10.1177/1529100620946707.

Kugler, Katharina G., and Peter T. Coleman. 2020. “Get Complicated: The Effects of Complexity on Conversations over Potentially Intractable Moral Conflicts.” Negotiation and Conflict Management Research 13 (3): 211–30. https://doi.org/10.1111/ncmr.12192.

Landry, Alexander P., Jonathan W. Schooler, Robb Willer, and Paul Seli. 2022. “Reducing Explicit Blatant Dehumanization by Correcting Exaggerated Meta-Perceptions.” Social Psychological and Personality Science 14 (4): 407–18. https://doi.org/10.1177/19485506221099146.

Le, Long Hoang, Phuong Ai Hoang, and Hiep Cong Pham. 2023. “Sharing Health Information across Online Platforms: A Systematic Review.” Health Communication 38 (8): 1550–62. https://doi.org/10.1080/10410236.2021.2019920.

Lederach, John. 2003. Little Book of Conflict Transformation: Clear Articulation of the Guiding Principles by a Pioneer in the Field. Original ed. New York: Good Books.

Ledford, Christy J.W., Lauren A. Cafferty, Justin X. Moore, Courtney Roberts, Ebony B. Whisenant, Alejandra Garcia Rychtarikova, and Dean A. Seehusen. 2022. “The Dynamics of Trust and Communication in COVID-19 Vaccine Decision Making: A Qualitative Inquiry.” Journal of Health Communication 27 (1): 17–26. https://doi.org/10.1080/10810730.2022.2028943.

Lee, Jiyoung, Ji Won Kim, and Hee Yun Lee. 2023. “Unlocking Conspiracy Belief Systems: How Fact-Checking Label on Twitter Counters Conspiratorial MMR Vaccine Misinformation.” Health Communication 38 (9): 1780–92. https://doi.org/10.1080/10410236.2022.2031452.

Lees, Jeffrey, and Mina Cikara. 2020. “Inaccurate Group Meta-Perceptions Drive Negative out-Group Attributions in Competitive Contexts.” Nature Human Behaviour 4 (3): 279–86. https://doi.org/10.1038/s41562-019-0766-4.

Levendusky, Matthew. 2023. Our Common Bonds: Using What Americans Share to Help Bridge the Partisan Divide. Chicago Studies in American Politics. Chicago, IL: University of Chicago Press. https://press.uchicago.edu/ucp/books/book/chicago/O/bo190265728.html.

Levendusky, Matthew S. 2018. “Americans, Not Partisans: Can Priming American National Identity Reduce Affective Polarization?” The Journal of Politics 80 (1): 59–70. https://doi.org/10.1086/693987.

Linden, Sander van der, Anthony Leiserowitz, and Edward Maibach. 2018. “Scientific Agreement Can Neutralize Politicization of Facts.” Nature Human Behaviour 2 (1): 2–3. https://doi.org/10.1038/s41562-017-0259-2.

Lohmann, Roger, and Jon Van Til. 2011. Resolving Community Conflicts and Problems: Public Deliberation and Sustained Dialogue. Columbia University Press. https://doi.org/10.7312/lohm15168.

Longmire, Natalie H., and David A. Harrison. 2018. “Seeing Their Side versus Feeling Their Pain: Differential Consequences of Perspective-Taking and Empathy at Work.” Journal of Applied Psychology 103 (8): 894–915. https://doi.org/10.1037/apl0000307.

Magee, Lucia, Felicity Knights, Doug G. J. Mckechnie, Roaa Al-bedaery, and Mohammad S. Razai. 2022. “Facilitators and Barriers to COVID-19 Vaccination Uptake among Ethnic Minorities: A Qualitative Study in Primary Care.” PLOS ONE 17 (7): e0270504. https://doi.org/10.1371/journal.pone.0270504.

Mansbridge, Jane, and Audrey Latura. 2016. “The Polarization Crisis in the United States and the Future of Listening.” In Strong Democracy in Crisis: Promise or Peril?, edited by Trevor Norris, 29–54. London: Rowman & Littlefield. https://rowman.com/ISBN/9781498533621/Strong-Democracy-in-Crisis-Promise-or-Peril.

Marchal, Nahema. 2022. “‘Be Nice or Leave Me Alone’: An Intergroup Perspective on Affective Polarization in Online Political Discussions.” Communication Research 49 (3): 376–98. https://doi.org/10.1177/00936502211042516.

Markowitz, David M., Timothy R. Levine, Kim B. Serota, and Alivia D. Moore. 2023. “Cross-Checking Journalistic Fact-Checkers: The Role of Sampling and Scaling in Interpreting False and Misleading Statements.” PLOS ONE 18 (7): e0289004. https://doi.org/10.1371/journal.pone.0289004.

Martinez-Ebers, Valerie, Brian Robert Calfano, and Regina Branton. 2021. “Bringing People Together: Improving Intergroup Relations via Group Identity Cues.” Urban Affairs Review 57 (1): 104–27. https://doi.org/10.1177/1078087419853390.

Massey, Philip M., Matthew D. Kearney, Michael K. Hauer, Preethi Selvan, Emmanuel Koku, and Amy E. Leader. 2020. “Dimensions of Misinformation About the HPV Vaccine on Instagram: Content and Network Analysis of Social Media Characteristics.” Journal of Medical Internet Research 22 (12): e21451. https://doi.org/10.2196/21451.

McGlynn, Joseph, Maxim Baryshevtsev, and Zane A. Dayton. 2020. “Misinformation More Likely to Use Non-Specific Authority References: Twitter Analysis of Two COVID-19 Myths.” Harvard Kennedy School Misinformation Review 1 (3). https://doi.org/10.37016/mr-2020-37.

McGrew, Sarah. 2020. “Learning to Evaluate: An Intervention in Civic Online Reasoning.” Computers & Education 145 (February): 103711. https://doi.org/10.1016/j.compedu.2019.103711.

McGrew, Sarah, and Joel Breakstone. 2022. “Civic Online Reasoning Across the Curriculum: Developing and Testing the Efficacy of Digital Literacy Lessons.” https://doi.org/10.25740/dd707pp9195.

Melchior, Cristiane, and Mírian Oliveira. 2022. “Health-Related Fake News on Social Media Platforms: A Systematic Literature Review.” New Media & Society 24 (6): 1500–1522. https://doi.org/10.1177/14614448211038762.

Mernyk, Joseph S., Sophia L. Pink, James N. Druckman, and Robb Willer. 2022. “Correcting Inaccurate Metaperceptions Reduces Americans’ Support for Partisan Violence.” Proceedings of the National Academy of Sciences 119 (16): e2116851119. https://doi.org/10.1073/pnas.2116851119.

Miller, Patrick R., and Pamela Johnston Conover. 2015. “Why Partisan Warriors Don’t Listen: The Gendered Dynamics of Intergroup Anxiety and Partisan Conflict.” Politics, Groups, and Identities 3 (1): 21–39. https://doi.org/10.1080/21565503.2014.992795.

Morosoli, Sophie, Peter Van Aelst, and Patrick van Erkel. 2022. “To Convince, to Provoke or to Entertain? A Study on Individual Motivations behind Engaging with Conspiracy Theories Online.” Convergence 28 (4): 1030–59. https://doi.org/10.1177/13548565221105792.

Mosleh, Mohsen, Cameron Martel, Dean Eckles, and David Rand. 2021. “Perverse Downstream Consequences of Debunking: Being Corrected by Another User for Posting False Political News Increases Subsequent Sharing of Low Quality, Partisan, and Toxic Content in a Twitter Field Experiment.” In Proceedings of the 2021 CHI Conference on Human Factors in Computing Systems, 1–13. CHI ’21. New York, NY, USA: Association for Computing Machinery. https://doi.org/10.1145/3411764.3445642.

Munger, Kevin. 2017. “Tweetment Effects on the Tweeted: Experimentally Reducing Racist Harassment.” Political Behavior 39 (3): 629–49. https://doi.org/10.1007/s11109-016-9373-5.

———. 2021. “Don’t @ Me: Experimentally Reducing Partisan Incivility on Twitter.” Journal of Experimental Political Science 8 (2): 102–16. https://doi.org/10.1017/XPS.2020.14.

Muradova, Lala. 2021. “Seeing the Other Side? Perspective-Taking and Reflective Political Judgements in Interpersonal Deliberation.” Political Studies 69 (3): 644–64. https://doi.org/10.1177/0032321720916605.

Murthy, Vivek H. 2021. “Confronting Health Misinformation: The U.S. Surgeon General’s Advisory on Building a Healthy Information Environment.” Washington, D.C.: Office of the Surgeon General of the United States.

Nassetta, Jack, and Kimberly Gross. 2020. “State Media Warning Labels Can Counteract the Effects of Foreign Misinformation.” Harvard Kennedy School Misinformation Review, October. https://doi.org/10.37016/mr-2020-45.

National Academies of Sciences, Engineering, and Medicine. 2017. Communicating Science Effectively: A Research Agenda. Committee on the Science of Science Communication, Division of Behavioral and Social Sciences and Education. Washington, DC: National Academies Press. http://www.ncbi.nlm.nih.gov/books/NBK425710/.

Norris, Trevor, ed. 2016. Strong Democracy in Crisis: Promise or Peril? Lexington Books. https://rowman.com/ISBN/9781498533614/Strong-Democracy-in-Crisis-Promise-or-Peril.

Oh, Hyun Jung, and Hyegyu Lee. 2019. “When Do People Verify and Share Health Rumors on Social Media? The Effects of Message Importance, Health Anxiety, and Health Literacy.” Journal of Health Communication 24 (11): 837–47. https://doi.org/10.1080/10810730.2019.1677824.

Oschatz, Corinna, and Caroline Marker. 2020. “Long-Term Persuasive Effects in Narrative Communication Research: A Meta-Analysis.” Journal of Communication 70 (4): 473–96. https://doi.org/10.1093/joc/jqaa017.

Ou, Mengxue, Han Zheng, Hye Kyung Kim, and Xiaoyu Chen. 2023. “A Meta-Analysis of Social Media Fatigue: Drivers and a Major Consequence.” Computers in Human Behavior 140: 107597. https://doi.org/10.1016/j.chb.2022.107597.

Paluck, Elizabeth Levy, Roni Porat, Chelsey S. Clark, and Donald P. Green. 2021. “Prejudice Reduction: Progress and Challenges.” Annual Review of Psychology 72: 533–60. https://doi.org/10.1146/annurev-psych-071620-030619.

Panizza, Folco, Piero Ronzani, Carlo Martini, Simone Mattavelli, Tiffany Morisseau, and Matteo Motterlini. 2022. “Lateral Reading and Monetary Incentives to Spot Disinformation about Science.” Scientific Reports 12 (1): 5678. https://doi.org/10.1038/s41598-022-09168-y.

Peng, Wei, Sue Lim, and Jingbo Meng. 2023. “Persuasive Strategies in Online Health Misinformation: A Systematic Review.” Information, Communication & Society 26 (11): 2131–48. https://doi.org/10.1080/1369118X.2022.2085615.

Pennycook, Gordon, Jonathon McPhetres, Yunhao Zhang, Jackson G. Lu, and David G. Rand. 2020. “Fighting COVID-19 Misinformation on Social Media: Experimental Evidence for a Scalable Accuracy-Nudge Intervention.” Psychological Science, June. https://doi.org/10.1177/0956797620939054.

Pennycook, Gordon, and David G. Rand. 2022. “Accuracy Prompts Are a Replicable and Generalizable Approach for Reducing the Spread of Misinformation.” Nature Communications 13 (1): 2333. https://doi.org/10.1038/s41467-022-30073-5.

Petersen, Michael Bang, Alexander Bor, Frederik Jørgensen, and Marie Fly Lindholt. 2021. “Transparent Communication about Negative Features of COVID-19 Vaccines Decreases Acceptance but Increases Trust.” Proceedings of the National Academy of Sciences 118 (29): e2024597118. https://doi.org/10.1073/pnas.2024597118.

Pettigrew, Thomas F., and Linda R. Tropp. 2008. “How Does Intergroup Contact Reduce Prejudice? Meta-Analytic Tests of Three Mediators.” European Journal of Social Psychology 38 (6): 922–34. https://doi.org/10.1002/ejsp.504.

Pink, Sophia L., James Chu, James N. Druckman, David G. Rand, and Robb Willer. 2021. “Elite Party Cues Increase Vaccination Intentions among Republicans.” Proceedings of the National Academy of Sciences 118 (32): e2106559118. https://doi.org/10.1073/pnas.2106559118.

Rajadesingan, Ashwin, Carolyn Duran, Paul Resnick, and Ceren Budak. 2021. “‘Walking Into a Fire Hoping You Don’t Catch’: Strategies and Designs to Facilitate Cross-Partisan Online Discussions.” Proceedings of the ACM on Human-Computer Interaction 5 (CSCW2): 393:1–393:30. https://doi.org/10.1145/3479537.

Rathje, Steve, James K He, Jon Roozenbeek, Jay J Van Bavel, and Sander van der Linden. 2022. “Social Media Behavior Is Associated with Vaccine Hesitancy.” PNAS Nexus 1 (4): pgac207. https://doi.org/10.1093/pnasnexus/pgac207.

Rathje, Steve, Jon Roozenbeek, Jay J. Van Bavel, and Sander van der Linden. 2023. “Accuracy and Social Motivations Shape Judgements of (Mis)Information.” Nature Human Behaviour 7 (6): 892–903. https://doi.org/10.1038/s41562-023-01540-w.

Rathje, Steve, Jay J. Van Bavel, and Sander van der Linden. 2021. “Out-Group Animosity Drives Engagement on Social Media.” Proceedings of the National Academy of Sciences 118 (26): e2024292118. https://doi.org/10.1073/pnas.2024292118.

Rivera, Yonaira M., Meghan B. Moran, Johannes Thrul, Corinne Joshu, and Katherine C. Smith. 2022. “When Engagement Leads to Action: Understanding the Impact of Cancer (Mis)Information among Latino/a Facebook Users.” Health Communication 37 (9): 1229–41. https://doi.org/10.1080/10410236.2021.1950442.

Rogers, Carl Ransom, and Richard Evans Farson. 1957. Active Listening. Chicago, IL: Industrial Relations Center, The University of Chicago.

Rogers, Carl Ransom, and Richard Evans Farson. 2021. Active Listening. New York: Mockingbird Press.

Roozenbeek, Jon, and Sander van der Linden. 2019. “Fake News Game Confers Psychological Resistance against Online Misinformation.” Palgrave Communications 5 (1): 1–10. https://doi.org/10.1057/s41599-019-0279-9.

Roozenbeek, Jon, Sander van der Linden, Beth Goldberg, Steve Rathje, and Stephan Lewandowsky. 2022. “Psychological Inoculation Improves Resilience against Misinformation on Social Media.” Science Advances 8 (34): eabo6254. https://doi.org/10.1126/sciadv.abo6254.

Santoro, Erik, and David E. Broockman. 2022. “The Promise and Pitfalls of Cross-Partisan Conversations for Reducing Affective Polarization: Evidence from Randomized Experiments.” Science Advances 8 (25): eabn5515. https://doi.org/10.1126/sciadv.abn5515.

Saveski, Martin, Nabeel Gillani, Ann Yuan, Prashanth Vijayaraghavan, and Deb Roy. 2021. “Perspective-Taking to Reduce Affective Polarization on Social Media.” arXiv. https://doi.org/10.48550/arXiv.2110.05596.

Schmid, Philipp, Katrine Bach Habersaat, and Noni E. MacDonald. 2017. “How to Respond to Vocal Vaccine Deniers in Public: Best Practice Guidance.” Copenhagen, Denmark: World Health Organization, Regional Office for Europe. https://iris.who.int/bitstream/handle/10665/343301/WHO-EURO-2017-2899-42657-59427-eng.pdf.

Schmid, Philipp, and Cornelia Betsch. 2019. “Effective Strategies for Rebutting Science Denialism in Public Discussions.” Nature Human Behaviour 3 (9): 931–39. https://doi.org/10.1038/s41562-019-0632-4.

Schoch-Spana, Monica, Emily Brunson, Sanjana Ravi, Madison Taylor, Marc Trotochaud, and Divya Hosangadi. 2022. “The CommuniHealth Playbook: How to Spur on Your Local Community Health Sector.” CommuniHealth Coalition. Baltimore, Maryland: Johns Hopkins Center for Health Security. https://centerforhealthsecurity.org/sites/default/files/2023-01/2022117-communihealth-playbook.pdf.

Scudder, Mary F. 2022. “Measuring Democratic Listening: A Listening Quality Index.” Political Research Quarterly 75 (1): 175–87. https://doi.org/10.1177/1065912921989449.

Seo, Haeseung, Aiping Xiong, Sian Lee, and Dongwon Lee. 2022. “If You Have a Reliable Source, Say Something: Effects of Correction Comments on COVID-19 Misinformation.” Proceedings of the International AAAI Conference on Web and Social Media 16 (May): 896–907. https://doi.org/10.1609/icwsm.v16i1.19344.

Seo, Hyunjin, Joseph Erba, Darcey Altschwager, and Mugur Geana. 2019. “Evidence-Based Digital Literacy Class for Older, Low-Income African-American Adults.” Journal of Applied Communication Research 47 (2): 130–52. https://doi.org/10.1080/00909882.2019.1587176.

Seo, Hyunjin, Yuchen Liu, Muhammad Ittefaq, Fatemeh Shayesteh, Ursula Kamanga, and Annalise Baines. 2022. “International Migrants and Coronavirus Disease 2019 Vaccinations: Social Media, Motivated Information Management, and Vaccination Willingness.” Digital Health 8 (January): 20552076221125972. https://doi.org/10.1177/20552076221125972.

Shelby, Ashley, and Karen Ernst. 2013. “Story and Science: How Providers and Parents Can Utilize Storytelling to Combat Anti-Vaccine Misinformation.” Human Vaccines & Immunotherapeutics 9 (8): 1795–1801. https://doi.org/10.4161/hv.24828.

Sirin, Cigdem V., Nicholas A. Valentino, and José D. Villalobos. 2016. “Group Empathy Theory: The Effect of Group Empathy on US Intergroup Attitudes and Behavior in the Context of Immigration Threats.” The Journal of Politics 78 (3): 893–908. https://doi.org/10.1086/685735.

Sippitt, Amy. 2019. “Does the ‘Backfire Effect’ Exist—and Does It Matter for Factcheckers?” Full Fact (blog). March 20, 2019. https://fullfact.org/blog/2019/mar/does-backfire-effect-exist/.

Swire-Thompson, Briony, Joseph DeGutis, and David Lazer. 2020. “Searching for the Backfire Effect: Measurement and Design Considerations.” Journal of Applied Research in Memory and Cognition 9 (3): 286–99. https://doi.org/10.1016/j.jarmac.2020.06.006.

Talaifar, Sanaz, Michael D. Buhrmester, Özlem Ayduk, and William B. Swann. 2021. “Asymmetries in Mutual Understanding: People With Low Status, Power, and Self-Esteem Understand Better Than They Are Understood.” Perspectives on Psychological Science 16 (2): 338–57. https://doi.org/10.1177/1745691620958003.

Thomas, Stephen B., Sandra Crouse Quinn, Meg Jordan, Elsie Essien, Surayyah Khan, Maggie Daly, Sanjana Ravi, Michael Brown, Katrina Randolph, and Frederico Spry. 2022. “Strengthening Health Promotion Through Sustained Hyperlocal Community Engagement.” CommuniVax 2.0: Strategies for Standing Up, Strengthening, and Sustaining the Local Community Health Sector. Baltimore, Maryland: Johns Hopkins Center for Health Security.

Thornton, Courtney, and Jennifer A. Reich. 2022. “Black Mothers and Vaccine Refusal: Gendered Racism, Healthcare, and the State.” Gender & Society 36 (4): 525–51. https://doi.org/10.1177/08912432221102150.

Transue, John E. 2007. “Identity Salience, Identity Acceptance, and Racial Policy Attitudes: American National Identity as a Uniting Force.” American Journal of Political Science 51 (1): 78–91. https://doi.org/10.1111/j.1540-5907.2007.00238.x.

Tully, Melissa, Leticia Bode, and Emily K. Vraga. 2020. “Mobilizing Users: Does Exposure to Misinformation and Its Correction Affect Users’ Responses to a Health Misinformation Post?” Social Media + Society 6 (4): 2056305120978377. https://doi.org/10.1177/2056305120978377.

“Vaccine Messaging Guide.” 2020. Yale Institute for Global Health, UNICEF Demand for Immunization. https://www.unicef.org/media/93661/file/Vaccinemessagingguide.pdf.

Vicol, Dora-Olivia. 2020. “Media and Information Literacy: Lessons from Interventions around the World.” Full Fact, Africa Check and Chequeado. https://fullfact.org/media/uploads/media-information-literacy-lessons.pdf.

Voelkel, Jan G., Michael N. Stagnaro, James Y. Chu, Sophia L. Pink, Joseph S. Mernyk, Chrystal Redekopp, Isaias Ghezae, et al. 2024. “Megastudy Testing 25 Treatments to Reduce Antidemocratic Attitudes and Partisan Animosity.” Science 386 (6719): eadh4764. https://doi.org/10.1126/science.adh4764.

Vraga, Emily K., and Leticia Bode. 2018. “Using Expert Sources to Correct Health Misinformation in Social Media.” Science Communication 39 (5): 621–45. https://doi.org/10.1177/1075547017731776.

Vraga, Emily K., Sojung Claire Kim, and John Cook. 2019. “Testing Logic-Based and Humor-Based Corrections for Science, Health, and Political Misinformation on Social Media.” Journal of Broadcasting & Electronic Media 63 (3): 393–414. https://doi.org/10.1080/08838151.2019.1653102.

Walter, Nathan, John J. Brooks, Camille J. Saucier, and Sapna Suresh. 2021. “Evaluating the Impact of Attempts to Correct Health Misinformation on Social Media: A Meta-Analysis.” Health Communication 36 (13): 1776–84. https://doi.org/10.1080/10410236.2020.1794553.

Walter, Nathan, Jonathan Cohen, R. Lance Holbert, and Yasmin Morag. 2020. “Fact-Checking: A Meta-Analysis of What Works and for Whom.” Political Communication 37 (3): 350–75. https://doi.org/10.1080/10584609.2019.1668894.

Wineburg, Sam, Joel Breakstone, Sarah McGrew, Mark Smith, and Teresa Ortega. 2021. “Lateral Reading on the Open Internet.” SSRN Scholarly Paper. Rochester, NY: Social Science Research Network. https://doi.org/10.2139/ssrn.3936112.

Wineburg, Sam, and Sarah McGrew. 2017. “Lateral Reading: Reading Less and Learning More When Evaluating Digital Information.” SSRN Scholarly Paper. Rochester, NY: Social Science Research Network. https://doi.org/10.2139/ssrn.3048994.

Wintersieck, Amanda, Kim Fridkin, and Patrick Kenney. 2021. “The Message Matters: The Influence of Fact-Checking on Evaluations of Political Messages.” Journal of Political Marketing 20 (2): 93–120. https://doi.org/10.1080/15377857.2018.1457591.

Wojcieszak, Magdalena, and R Kelly Garrett. 2018. “Social Identity, Selective Exposure, and Affective Polarization: How Priming National Identity Shapes Attitudes Toward Immigrants Via News Selection.” Human Communication Research 44 (3): 247–73. https://doi.org/10.1093/hcr/hqx010.

Wojcieszak, Magdalena, and Nuri Kim. 2016. “How to Improve Attitudes Toward Disliked Groups: The Effects of Narrative Versus Numerical Evidence on Political Persuasion.” Communication Research 43 (6): 785–809. https://doi.org/10.1177/0093650215618480.

Wojcieszak, Magdalena, and Benjamin R. Warner. 2020. “Can Interparty Contact Reduce Affective Polarization? A Systematic Test of Different Forms of Intergroup Contact.” Political Communication 37 (6): 789–811. https://doi.org/10.1080/10584609.2020.1760406.

Wojcieszak, Magdalena, Stephan Winter, and Xudong Yu. 2020. “Social Norms and Selectivity: Effects of Norms of Open-Mindedness on Content Selection and Affective Polarization.” Mass Communication and Society 23 (4): 455–83. https://doi.org/10.1080/15205436.2020.1714663.

Wood, Michele M., Dennis S. Mileti, Hamilton Bean, Brooke F. Liu, Jeannette Sutton, and Stephanie Madden. 2018. “Milling and Public Warnings.” Environment and Behavior 50 (5): 535–66. https://doi.org/10.1177/0013916517709561.

Yeomans, Michael, Julia Minson, Hanne Collins, Frances Chen, and Francesca Gino. 2020. “Conversational Receptiveness: Improving Engagement with Opposing Views.” Organizational Behavior and Human Decision Processes 160 (September): 131–48. https://doi.org/10.1016/j.obhdp.2020.03.011.

Yeomans, Michael, Maurice E. Schweitzer, and Alison Wood Brooks. 2022. “The Conversational Circumplex: Identifying, Prioritizing, and Pursuing Informational and Relational Motives in Conversation.” Current Opinion in Psychology 44 (April): 293–302. https://doi.org/10.1016/j.copsyc.2021.10.001.

Young, Dannagal G., Kathleen Hall Jamieson, Shannon Poulsen, and Abigail Goldring. 2018. “Fact-Checking Effectiveness as a Function of Format and Tone: Evaluating FactCheck.Org and FlackCheck.Org.” Journalism & Mass Communication Quarterly 95 (1): 49–75. https://doi.org/10.1177/1077699017710453.

Zaki, Jamil. 2020. “Integrating Empathy and Interpersonal Emotion Regulation.” Annual Review of Psychology 71: 517–40. https://doi.org/10.1146/annurev-psych-010419-050830.

Zerback, Thomas, Florian Töpfl, and Maria Knöpfle. 2021. “The Disconcerting Potential of Online Disinformation: Persuasive Effects of Astroturfing Comments and Three Strategies for Inoculation against Them.” New Media & Society 23 (5): 1080–98. https://doi.org/10.1177/1461444820908530.

Zhao, Xinyan, and Stephanie J. Tsang. 2024. “How People Process Different Types of Health Misinformation: Roles of Content Falsity and Evidence Type.” Health Communication 39 (4): 741–53. https://doi.org/10.1080/10410236.2023.2184452.
