Citation

The responsibility to protect online: Lessons from R2P and the politics of Western-centricity in online harms regulation

Author:
Stilinovic, Milica; Gray, Joanne; Hutchinson, Jonathan
Publication:
Policy & Internet
Year:
2024

The pursuit of reducing online harms has become an integral part of both internet and digital platform governance, with states around the world seeking to implement policies and regulatory mechanisms to address the issue at breakneck speed. In this editorial, we suggest the race to have nation states protect citizens from harm online is reminiscent of the era of humanitarian interventionism in international relations. From the 1990s, the urgency for states and international organisations to intervene in global conflicts was sparked by the atrocities of genocide in Rwanda in 1994, in Bosnia (Srebrenica) in 1995, and in Sudan, in the region of Darfur, from 2003, when key institutions had failed to act (Grünfeld & Vermeulen, 2009, p. 222). Back then, the controversies and cost of inaction—namely, a death toll of 500,000–800,000 in Rwanda, the ignored pleas of peacekeepers in Srebrenica resulting in the deaths of some 8000 Muslim men and boys, and the repeated failure to protect citizens in Darfur from state-orchestrated violence resulting in the deaths of approximately 200,000 people—spurred the international community to establish an international governance mechanism to guide state and institutional intervention in conflicts to protect global citizens from harm (Sigwebela, 2016, p. iv). In 2005, the UN World Summit unanimously accepted a set of principles—coined the Responsibility to Protect (R2P)—aimed at protecting civilians from mass atrocity crimes such as genocide, ethnic cleansing and crimes against humanity (see General Assembly, 2005).

The principles were promising yet controversial. The execution of R2P has often been described as an exercise in the political strong-arming of states with opposing views, resulting in stalemate or inaction during a crisis, and in diplomatic tensions (Spies & Dzimiri, 2011, p. 33). Importantly, questions continue to arise regarding the true intent of R2P: whether it embodies a genuine desire to protect people from harm or is in reality a Western-centric modality for enforcing conformity among states globally (Rotmann et al., 2014, p. 361).

We argue there are important lessons to be learned from R2P relevant to online safety policy initiatives. Currently, online safety is pursued through a patchwork of national and international laws, guidelines and industry self-regulation, with varying degrees of effectiveness and enforcement. The controversies of regulating online harm arise from questions similar to those surrounding R2P. Specifically, who is doing the protecting? Who is being protected? When should protective actions be taken? What is the intent of online harms regulation? Like R2P, online harms interventions face a political quagmire, one that risks inaction in the context of global problems, particularly if shaped exclusively by Western-centric norms.

Historically, policymaking at the international level has often been influenced by what we refer to as a Western threshold of morality, or the guiding principles defined by Western values. Western-centricity impacts the mechanisms of policymaking in several ways. Here we focus on two in particular. The first is that Western values tend to address the needs of Western societies rather than the needs of a wider international community. This narrow focus is problematic when addressing a global phenomenon such as online harms, which often requires cooperation and transparency among states to eradicate.
The second issue arises from defining online harms through a Western lens that fails to address specific threats in ways that are translatable to other contexts. We argue that policymakers must pay attention to the experiences of both developed and developing states when defining online harms to minimise the impact of Western-centricity on regulation. Furthermore, Western-centric policies should not be used to set global norms and standards in online harm protections that ultimately govern how digital platforms work in a range of different countries, as they are not context specific and impose Western values.

Western-centricity is a term that describes the influence of hegemonic Western culture, often applied to understanding its impact on politics at a time of increased globalisation (Sune, 2024, p. 1). This influence incorporates judgements and values based on Western moralities. In policymaking, morality can be seen as ‘no less than the legal sanction of right and wrong, the validation of a particular set of basic values’ (Mooney, 1999, p. 675). We argue that these values develop a threshold, one that divides what is perceived as right or wrong in accordance with Western moral principles and codes. In the realm of policymaking—particularly within a globalised political universe built on values of cooperation and transparency among states—the knife’s edge between right and wrong can define what or who is accepted. In the context of online safety laws, it can determine who gets to protect and what is protected.

The Western threshold of morality can be evidenced in both the principles and mechanisms of R2P and online harms regulation. Western-centricity within the R2P framework has long been critiqued (Glover, 2011; Hao, 2015; Stefan, 2017, p. 90). Among these criticisms is that the aspiration for R2P to be an emancipatory principle—or what critical scholars refer to as a quest for freedom, the breaking of chains away from norms which both constrict and violate human rights (Bohman, 2005)—is an inherently Western approach (Glover, 2011, p. 10; Naz & Ahmad, 2022, p. 111). Furthermore, critiques often focus on the operational elements and the outcomes of R2P as Western-derived. Specifically, R2P is structured around three pillars (see Šimonovic, 2017). The first is the responsibility of the state to protect its citizens from heinous crimes against humanity. The second rests responsibility on the shoulders of the international community to assist states in protecting their citizens from harms. The third pillar relies on the international community to intervene when states fail to protect their citizens. Put simply, the end game for R2P is military intervention against the perpetrating actor.

Within this context, criticism of Western-centricity focuses on the decision-making processes surrounding intervention. Critics argue that Western-centricity can be evidenced in the decision to intervene, the actors who make the decision to use force, when force is used and against whom (Spies & Dzimiri, 2011, p. 33). These decision-making processes are perceived by some as an exercise in the strong-arming, by powers willing to intervene, of states reluctant to use force (Magu, 2021). Member states with voting or veto clout often base decisions on their political loyalties and interests, with some having the power to propose or resist intervention depending on their political agenda (Heise & Schuck, 2017).
Hence, while the former Non-Aligned Movement rallied for intervention in Rwanda, the United States was strictly opposed (Adebajo, 2018). While Western forces intervened in Kosovo in 1999 and Libya in 2011, the mechanisms of R2P remain dormant in states such as Syria and Sudan. Thus Western-centricity, while providing R2P with aspirational benchmarks of emancipation, can often be the lynchpin for inaction, leaving parts of the globe insecure and without protection from harm.

For online harms, Western-centricity is highlighted via the states that are taking charge and developing principles that may influence online harms regulation globally. Currently, while most states are discussing or pledging a commitment to developing laws regarding particular facets of online harms, Western states—such as the United Kingdom, Canada and Australia—and Western-derived supranational entities—such as the European Union—remain at the forefront of these regulatory measures. Each of these examples incorporates Western-centric modes of execution, such as the institutionalisation of the responsibility to protect, along with higher levels of human agency to articulate harm.

For example, the UK’s Online Safety Act 2023 (c.50), passed by parliament in late 2023, places the responsibility to protect users from online harms on service providers (platforms) and gives Ofcom—the UK’s Office of Communications—the power to block websites that contain what is perceived to be harmful content. Australia’s Online Safety Act 2021 (C01) is touted by the state’s eSafety Commissioner (n.d.a) as a world-first adult cyber abuse scheme for Australians 18 years and older. In the same fashion as the United Kingdom, Australia’s Act places an onus on service providers to regulate harmful content and grants the eSafety Commissioner the power to enforce the Act (see Australian Government, 2022, sect. 4). Currently, Canada’s Bill C-63—the proposed Online Harms Act—aims to hold platforms to account for the harmful content they host (see Government of Canada, 2024). The platform-focused regulatory measure aims to establish the Digital Safety Commission of Canada to implement and enforce the Act. The Bill also grants the public greater oversight and agency to engage in policymaking discourse and ‘flag’ perceived harmful content. Specifically—much like its Western predecessors—the Bill aims to produce a cycle of accountability. In line with the policymaking principle of Safety by Design (eSafety Commissioner, n.d.b)—a principle that places user rights and safety at the heart of policy design—the Bill places the onus on service providers and emphasises user empowerment, transparency and accountability. Should a service provider fail to provide a safe environment for its users, a regulatory body would assume the responsibility, followed by the State. Thus, within the context of Canada’s C-63, the responsibility to act falls on service providers, users, newly forged governance bodies and, at the last stage, the government. This element is important to highlight as, much like other regulatory measures within the online harms space, the Bill presents liberalist notions of human rights and agency, free market determinism and minimal government intervention.
While aspirational, these principles may not, in their execution, be translatable to other state contexts or virtual worlds, such as nondemocratic states, fledgling democracies that lack the infrastructure to maintain free markets or effective government interventions, or autocracies that refuse to comply with liberal notions of governance and human rights. It could be argued that the scope and aims of C-63—much like the United Kingdom and Australian examples—are derived from Western-centric values and needs. While it is entirely within the norms of sovereignty and lawmaking for states to derive their own sense of protection, deciding what needs to be regulated, who is responsible for acting and what is being protected, the internet is a global phenomenon, so solutions need some level of cooperation among states to truly address harms. With Western regulation often serving as the compass for developing regulation in other parts of the globe, the ability to adopt these principles elsewhere has not been addressed, potentially resulting in a lack of coherence and transparency among states and service providers and, as seen with R2P, the regulatory strong-arming of states and platforms that refuse to comply. Thus, while C-63 and other examples of online harms regulation are a significant development in the field, they are also an example of harms regulation encapsulating Western-centric values, leaving parts of the globe lost in translation when seeking to protect their citizens, or their sovereignty, from harm.

Western-centricity is evidenced in current online harm policy formulations both in practice and rhetorically. For example, the aforementioned increase in human agency to report harms (evident in Canada’s proposed C-63 and in Australia’s online safety approach, both of which provide users with a toolkit to report online harms) represents liberalist notions of institutionalisation and higher levels of human agency. In addition, the definitional elements of ‘harms’ themselves can be evidenced in the inclusion in online regulation of terms such as ‘terrorism’ and ‘hate speech’, examples of politically loaded and malleable terms that are often subject to political interest.

This definitional dilemma was also apparent in the negotiating elements of R2P. Namely, R2P aims to protect civilians against mass atrocity crimes, including genocide. The UN’s 1948 Convention on the Prevention and Punishment of the Crime of Genocide articulates the definition of genocide in Article II as a crime committed with the intent to destroy a national, ethnic, racial or religious group, in whole or in part (UN General Assembly, 1948). However, while mass atrocity crimes such as genocide appear to offer a clear-cut threshold for intervention, such terms continue to be debated in practice (Straus, 2001, p. 349). Simon (1996, p. 244) highlights that the definition of protected groups is narrowcast—based solely on national, ethnic, racial or religious groups—and that there is an assumption that group membership must be permanent to garner protection, thus leading to contrasting views on genocide. Huttenbach (2004, p. 2) highlights the contrast between governance-derived definitions of genocide (such as in the 1948 convention) and the definitions articulated by governments, resulting in both tension and ambiguity. Often, the definitional contestation of the term genocide stems from subjective notions. As articulated by Boghossian (2010, p. 69), ‘Whenever a large number of people are killed, it is common for someone to suggest that the event qualifies as genocide and for someone else to dispute that classification’.
The contested nature of terms incorporated into definitions of harm can, arguably, create a situation in which responsibility gives way to political interest—debating whether or not to act based on political alliances and agendas—resulting in inaction in the face of mass atrocity violence (Bellamy & Dunne, 2016, p. 4). In the same way that R2P can experience political stalemate via the weaponisation of its negotiating stages, so too can attempts to regulate online harm within a global context.

Put simply, one state’s definition of ‘harm’ may not translate to another (Jiang et al., 2021). The definition of ‘harm’ has been a subject of open debate within academic circles and among platforms and service providers (Bartolo & Matamoros Fernandez, 2023, p. 1). While attempts have been made by states and supranational institutions—such as the United Kingdom and the European Union—to provide robust frameworks for harm definitions, the processes of defining harms have remained opaque (Bartolo & Matamoros Fernandez, 2023, p. 1). Definitions have also varied, depending on how policymakers considered the degree of risk involved (Jiang et al., 2021, p. 1). The definition of harm incorporated within Western legal frameworks is multifaceted, spanning from a strong focus on child protection to hate speech and terrorism.

The term terrorism has long been established by Critical Terrorism Studies scholars (Bryan et al., 2011; Greene, 2017; Schmid, 2004, p. 375) as politically loaded, with problematic outcomes. Namely, as articulated by the UN, states do not agree on the definition of terrorism or on who should or should not be labelled a terrorist, resulting in a lack of cohesion when addressing incidences of terrorism globally. Furthermore, the act of labelling a ‘terrorist’ has been noted to rest on the political subjectivity of states (Jackson, 2008). Labelling can, therefore, be weaponised in accordance with state interest, and the taking down of content labelled ‘terrorist’ material can produce equally harmful outcomes (Zelin, 2023, p. 560). So too could definitions of online harms be employed within specific contexts to attack marginalised groups (Watkin, 2023, p. 530). States also define hate speech according to what their interests perceive to be harmful. In Australia, within a multicultural context, hate speech is defined in terms of beliefs- and identity-based discrimination (see eSafety Commissioner, 2019, p. 4), whereas in China, hate speech is framed as a national security issue (Fu, 2019, p. 1). Another layer of incompatibility is added when considering the definitions presented within service providers’ and platforms’ terms of service—entities that transcend borders and must regulate what is produced by users residing in various political and legal contexts. These contrasts can result in a lack of translatability and transparency and in the development of insecurity, with certain societies and individuals excluded from protection.

Do the complexities of online harms mean that harms should not be addressed? No. In a reality in which the interplay between users, platforms and governments is increasingly digitised, it is imperative to regulate harmful content that can impact the wellbeing of states and their citizens.
However, a compartmentalisation of the definition of harms, one that incorporates experiences beyond the West and considers harms a global phenomenon transcending borders and the Western moral threshold, is equally imperative. While the field of online harms regulation is still emerging, there is a wide pool of research that considers the implications of regulatory measures—not exclusively within the online harms space—and examines harm from various facets of the globe, some of which is included in this issue of Policy & Internet. In this volume, authors from an array of disciplines explore harms posed by increased digitalisation, propose remedies for issues ranging from content moderation to the advancement of AI and cyber operations within conflict zones, and highlight potential gaps in addressing harms.

In Moderating Borderline Content while Respecting Fundamental Values, Macdonald and Vaughan (2023) explore the parameters of so-called lawful but awful content within the context of extremist/terrorist content and propose three principles of content moderation. The first is definitional clarity, the second is necessity and proportionality, and the third is transparency. The authors argue that while a number of platforms now publish their content moderation policies and transparency data reports, these largely focus on violative, not borderline, content. Moreover, there remain questions around access to data for independent researchers and transparency at the level of the individual user.

Mandate to Overblock? Understanding the impact of the European Union’s Article 17 on copyright content moderation on YouTube (Dergacheva & Katzenbach, 2024) presents the results of a study measuring possible overblocking due to copyright moderation, and changes in the diversity of the supply of cultural products on YouTube, in two EU member states comparable in size and population: Germany and France. The findings show that during the period examined, 2019 to 2022, significant differences emerged between Germany and France in the takedowns of videos from categories prone to copyright moderation.

In ‘To say report it, well, it seems a little useless’: Evaluating Australian expectations of online service providers and reducing online child sexual exploitation, Francis et al. (2024) present the findings of a survey of 482 Australian adults regarding their expectations of technology companies and governments in relation to key issues of online child protection. The results suggest strong demand for greater action by online service providers (OSPs) against sexual exploitation and for governments to legally enshrine some obligations for OSPs, as well as concern about the privacy of innocent users and about the data handling and cybersecurity of governments and technology companies.

Core Concerns: The Need for a Governance Framework to Protect Global Internet Infrastructure (Broeders & Sukumar, 2024) explores the digital dynamics of the war in Ukraine and threats to global Internet infrastructure from geopolitically motivated cyber operations. The paper, while noting that states increasingly acknowledge the need to protect the public core of the internet, argues that norms and international law remain ill-equipped to regulate damaging cyber operations, given unsettled questions regarding the sovereignty of states over global Internet infrastructure and the precise scope of their existing international obligations towards its protection.
In The unjust burden of digital inclusion for low-income migrant parents, Notley and Aziz (2024) consider the significant digital inclusion disparities between low- and high-income households across countries, along with the lack of in-depth research about the relationship between digital and social participation in low-income family households. Presenting long-term ethnographic research with low-income, migrant family households in the most culturally diverse region of Australia, Western Sydney, the authors argue that household digital inclusion is perceived by parents as necessary and important, but also as a burden that has social, financial and emotional dimensions.

Flew (2024), in Mediated Trust, the Internet and Artificial Intelligence: Ideas, Interests, Institutions, Futures, addresses the question of trust in communication, or mediated trust, with regard to the historical evolution of the Internet and, more recently, debates around the impacts of artificial intelligence (AI). The author proposes a ‘Three I’s’ framework of ideas, interests and institutions as a way of understanding how and why current proposals for greater regulation of digital platforms counterpose questions of credibility and social licence for digital tech giants against a dominant set of ideas around the Internet as a privileged domain of free speech.

Last, in Social Imaginaries of Digital Technology in South Korea during the COVID-19 Pandemic, Yoon (2024) examines common themes identified in discourses about digital technology-driven responses to COVID-19 in South Korea. By examining how digital technology is represented and thematised in policy and news discourses, the study explores how particular modes of society and societal order are circulated and how particular visions of the postpandemic society emerge. In addition, it explores the role of the government in extending institutional forces to disseminate particular visions of the postpandemic society, while addressing the media’s response to the government-circulated dominant social imaginary.