The internet offers connection and opportunity, but for many women and marginalized groups, it is also a space where Technology-Facilitated Gender-Based Violence (TFGBV) is pervasive. A 2024 report by Snapchat found that nearly one in four users across six countries—including Australia, India, and the U.S.—were victims of sextortion, a form of TFGBV. Globally, 38% of women report experiencing some form of online violence, according to the Economist Intelligence Unit. Beyond inflicting individual harm, this trend creates a “chilling effect” that silences women’s voices, reducing diversity in public discourse and pushing women and girls out of spaces where they could exercise agency and leadership.
The United Nations Population Fund (UNFPA) defines TFGBV as “an act of violence perpetrated by one or more individuals that is committed, assisted, aggravated and amplified in part or fully by the use of information and communication technologies or digital media against a person on the basis of gender.” Common forms of TFGBV include doxxing, cyberstalking, and hate speech. The abuse of women did not begin with the rise of social media, but the design choices of platforms built to optimize engagement and attention have enabled, amplified, and accelerated it. These platforms reward immediacy, emotional impact, and virality, qualities that help harmful content thrive. For example, emotionally charged content designed to provoke outrage, such as targeted harassment campaigns or hate speech, receives disproportionately high engagement because algorithmic ranking systems prioritize it.
Platforms have historically been slow to adopt safety features, often acting only after significant public outcry. Twitter introduced its report button in 2013, seven years after its launch, and Instagram did not add a mute button until 2018. This timeline reflects a broader industry issue: in the corporate race to “move fast and break things,” protecting vulnerable users has often been an afterthought for technology executives.
There is evidence that regulatory and human-centered design approaches can lead to safer online environments. Resources such as the Integrity Institute’s Focus on Features lay the groundwork for such claims by illustrating the extensive connections between platform design and digital harms and by providing actionable guidance for reducing abuse through rethinking features at their root. The United Kingdom’s Age-Appropriate Design Code (AADC) turned this concept into action. By targeting specific platform features and affordances, such as default privacy settings and restricted data collection, this legislation has significantly improved the experiences of social media users under the age of 18 in the UK. These initiatives illustrate both the evidence for and the power of addressing product design to reduce digital harm. Furthermore, as noted in “No Excuse for Abuse: What Social Media Companies Can Do Now to Combat Online Harassment and Empower Users,” platforms’ current approaches to mitigating abuse often fall short because they rely on retroactive measures, placing the burden on victims to protect themselves and report harm. This underscores the opportunity, and the necessity, of addressing the clear link between design features and digital harms before damage is caused.
Existing solutions, such as content moderation, address harm reactively and place the burden of safety on victims. In contrast, this paper advocates for a proactive, design-focused approach that embeds safety and user empowerment into the design of social media platforms.