Twitter finally started fact-checking Trump’s tweets in May, but the effort is unlikely to have much effect given the president’s parallel attacks on the media and the platforms. Such attacks erode trust among the very audience consuming his disinformation, and a fact-checker’s credibility with that audience is vital to the corrections being believed.
Two years after the Cambridge Analytica scandal, propaganda, data misuse, disinformation, and strategic influence present some of the most complex and rapidly evolving challenges of our time for researchers, civil society, and policymakers. But how well are we tackling this problem, and what’s left to do? Efforts at the policy level to regulate platforms, legislate data privacy, and pressure companies to remove noncompliant content are taking baby steps forward but still have a long way to go.
The public response to Cambridge Analytica and online disinformation has been almost entirely reactive. Most noticeably, researchers and journalists have focused on identifying examples of disinformation and notifying the platforms. Platforms, in turn, spend tens of millions of dollars on disinformation research that tracks how online campaign messaging is disseminated and consumed on social media. While these efforts can be valuable, a focus solely on tracking content tends toward solutions such as content removal, an approach that ignores what motivates platforms to act and may overlook other ideas and responsible parties.
The tracking approach often observes content without deriving or recommending solutions. It tackles only the visible surface of the problem and cannot identify the creators of the content; the corporations, governments, or other organizations funding it; or their use and misuse of data. To fully understand the issues raised by digital influence campaigns, or to develop new ways to respond to them, we must also focus on exposing and responding to the rapidly expanding digital influence industry.