How One Doctor’s False Claim Was Used to Erase Atrocities in Syria | Arc Digital

By Caroline O.
May 29, 2018

Using the recent controversy over false information about chemical attacks in Syria, spread on Twitter by the account @Thomas_Binder, as a case study, Caroline O. unpacks the human tendencies, as well as the technological changes, that drive the spread of mis- and disinformation today.

Claiming to be a cardiologist, Twitter user @Thomas_Binder posted a tweet in the aftermath of the chemical attack in Syria last month accusing medical workers of faking a photo in which victims of the attack were pictured receiving life-saving care. Binder later admitted that the information in his tweet was wrong, but by the time he did so, the false claim had already been retweeted over ten thousand times and used to propagate a smear campaign against the volunteer rescue group known as the White Helmets.


The virality of Binder’s tweet provides important insight into the human factors involved in the diffusion of misinformation (incorrect information spread without assigning intent, as distinct from “disinformation,” which implies deliberate deception). It shows how cognitive biases, ideological motives, social and cultural norms, and characteristics of the misinformation itself interact to fuel a vicious feedback loop. With so many headlines focused on automated accounts (“bots”), online advertisements, and algorithm manipulation, it is easy to overlook the fact that the problem we are dealing with is, at its core, a human one.

Source: How One Doctor’s False Claim Was Used To Erase Atrocities In Syria | Medium
