News Item

The scientific process, and how to handle misinformation | Columbia Journalism Review

After Donald Trump was elected, in 2016, misinformation—and its more toxic cousin, disinformation—began to feel like an increasingly urgent social and political emergency. Concerns about Russian trolls meddling in American elections were soon joined by concerns about hoaxes and conspiracy theories involving covid-19. Even those who could agree on how to define mis- and disinformation, however, debated what to do about the information itself: Should Facebook and Twitter remove “fake news” and disinformation, especially about something as critical as a pandemic? Should they “deplatform” repeat disinfo spreaders such as Trump and his ilk, so as not to infect others with their dangerous delusions? Should federal regulations require the platforms to take such steps?

After coming under pressure, both from the general public and from President Biden and members of Congress, Facebook and Twitter—and, to a lesser extent, YouTube—started actively removing such content. They began by banning the accounts of people such as Trump and Alex Jones, and later started blocking or “down-ranking” covid-related misinformation that appeared to be deliberately harmful. Is this the best way to handle the problem of misinformation? Some argue that it is, and that “deplatforming” people like Trump—or even blocking entire platforms, such as the right-wing Twitter clone Parler—works, in the sense that it quiets serial disinformers and removes misleading material. But not everyone agrees.

[…]

Source: The scientific process, and how to handle misinformation | Columbia Journalism Review