On the afternoon of January 6, 2021, Facebook leadership announced they were “appalled by the violence at the Capitol today,” and were approaching the situation “as an emergency.” Facebook staff were searching for and taking down problematic content, such as posts praising the storming of the U.S. Capitol, calls to bring weapons to protests, and encouragement about the day’s events. At the same time, Facebook announced it would block then-President Donald Trump’s account – a decision the company is now, two years later, considering reversing.
How effective was Facebook’s approach? Not very, concludes a new, comprehensive analysis of more than 2 million posts from researchers at NYU Cybersecurity for Democracy. The researchers identified 10,811 posts removed from 2,551 U.S. news sources. Among their major findings:
Nearly 80 percent of the potential engagement with posts that were eventually removed happened before they were taken down. This is because posts tend to get the most engagement – people commenting, “liking,” or otherwise interacting with a post – soon after it is published. The researchers estimated that despite the quick intervention, only 21 percent of predicted engagement was prevented (the sketch after this list illustrates the arithmetic).
Nearly a week after the attack, older posts began coming down. This, however, disrupted less than one percent of predicted future engagement, because by then nearly everyone who was going to engage with this content had already done so.
Facebook was more likely to remove posts by news sources known to spread misinformation, such as “Dan Bongino” on the right and “Occupy Democrats” on the left. During and after the January 6 attack, Facebook removed more content from sources classified as “far right” and “slightly right.”
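To make the front-loading arithmetic concrete, here is a minimal sketch assuming a post’s engagement accrues with exponential decay. The `fraction_prevented` function and the six-hour half-life are illustrative assumptions, not figures from the study; the researchers’ actual prediction model is not described here.

```python
import math

def fraction_prevented(takedown_hours: float, half_life_hours: float = 6.0) -> float:
    """Share of a post's predicted lifetime engagement that a takedown
    `takedown_hours` after publication prevents, assuming engagement
    accrues with exponential decay. The six-hour half-life is an
    illustrative assumption, not a figure from the study."""
    decay_rate = math.log(2) / half_life_hours
    # Under exponential decay, the fraction of lifetime engagement still
    # to come at time t is exp(-decay_rate * t).
    return math.exp(-decay_rate * takedown_hours)

for hours in (1, 6, 13, 24, 168):
    print(f"takedown after {hours:>3} h -> "
          f"{fraction_prevented(hours):.1%} of engagement prevented")
```

Under this toy model, a takedown roughly 13 hours after posting prevents only about 22 percent of predicted engagement, in the neighborhood of the study’s 21 percent estimate, while a removal a full week later prevents essentially none – consistent with the under-one-percent finding for the older posts.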