Encrypted messaging applications (EMAs) that rely on end-to-end encryption (E2EE), such as Signal,
Telegram, and WhatsApp, offer a level of intimacy and security that has made them remarkably
popular among activists and others who want to communicate without fear of government surveillance.
These qualities also make them a useful vector for disinformation: they offer a means of spreading
untraceable claims to users via trusted contacts in a secure environment. This policy brief argues that
successfully countering disinformation on EMAs does not require undermining this stronger form of
encryption.
Although EMAs typically end-to-end encrypt the content of private messages, they often do not encrypt
the metadata of those messages. Interventions based on that metadata show particular promise.
Metadata-based forwarding limits on WhatsApp, for instance, appear to have slowed the proliferation
of disinformation in India and elsewhere. Third-party evaluations of such approaches are needed to
develop and guide best practices for other platforms, particularly given criticism of, and broader
concern surrounding, WhatsApp's own use of this metadata.
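
To illustrate the general shape of such an intervention, the sketch below shows how a forwarding limit might be enforced using only unencrypted envelope metadata, without ever decrypting the message body. The thresholds, field names, and types (MessageEnvelope, MAX_FORWARD_HOPS, MAX_RECIPIENTS) are hypothetical and are not drawn from WhatsApp's actual implementation.

```python
# Hypothetical sketch of a metadata-based forwarding limit.
# The check reads only unencrypted envelope metadata and never
# inspects the end-to-end-encrypted payload.
from dataclasses import dataclass

MAX_FORWARD_HOPS = 5   # treat a message as "frequently forwarded" after 5 hops (illustrative value)
MAX_RECIPIENTS = 5     # cap on how many chats a single forward can target (illustrative value)

@dataclass
class MessageEnvelope:
    forward_count: int    # how many times this message has already been forwarded
    recipient_count: int  # how many chats the current forward targets
    ciphertext: bytes     # E2EE payload; never decrypted or inspected here

def allow_forward(envelope: MessageEnvelope) -> bool:
    """Decide whether to permit a forward using metadata alone."""
    if envelope.forward_count >= MAX_FORWARD_HOPS:
        # Frequently forwarded messages may only go to one chat at a time.
        return envelope.recipient_count <= 1
    return envelope.recipient_count <= MAX_RECIPIENTS
```

The key design point for policy purposes is that every input to the decision is metadata the platform already handles in the clear, so the intervention requires no weakening of E2EE.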
Many EMAs offer channels for mass public broadcasts in addition to private messaging. By building
automated tools to monitor the flow of disinformation between mainstream platforms and public
channels on EMAs, counter-disinformation operations can craft targeted cross-platform interventions.
Such efforts align with the global push to make tech platforms more accountable, transparent, and
accessible to academics and journalists.
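
As a rough illustration of what such automated tooling might look like, the sketch below surfaces URLs that appear both in posts gathered from public EMA channels and in posts from a mainstream platform. The function names are illustrative, and data collection is assumed to happen separately through each platform's public interfaces or export tools.

```python
# Simplified sketch of cross-platform monitoring: given posts scraped from
# public EMA channels and from a mainstream platform, find URLs circulating
# in both places so investigators can prioritise them.
import re
from collections import Counter

URL_RE = re.compile(r"https?://[^\s)\]]+")

def extract_urls(posts: list[str]) -> Counter:
    """Count URL occurrences across a list of post texts."""
    counts: Counter = Counter()
    for post in posts:
        counts.update(URL_RE.findall(post))
    return counts

def cross_platform_overlap(ema_posts: list[str], mainstream_posts: list[str]) -> list[tuple[str, int, int]]:
    """Return URLs seen on both sides, with their counts on each, busiest first."""
    ema_counts = extract_urls(ema_posts)
    main_counts = extract_urls(mainstream_posts)
    shared = set(ema_counts) & set(main_counts)
    return sorted(
        ((url, ema_counts[url], main_counts[url]) for url in shared),
        key=lambda item: item[1] + item[2],
        reverse=True,
    )
```

Because it operates only on publicly broadcast content, monitoring of this kind likewise leaves private, end-to-end-encrypted conversations untouched.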
Disinformation campaigns on EMAs are successful primarily because of the intimacy and trust
they afford. Regulatory responses to disinformation on EMAs should therefore target how that trust is
leveraged, rather than EMAs’ use of E2EE. For example, stricter advertising disclosure laws would
prevent “influence farms” coordinating on EMAs from spreading untraceable political messaging.