
Strengthening international cooperation on AI

Author:
Kerry, Cameron F.; Meltzer, Joshua P.; Renda, Andrea; Engler, Alex; Fanni, Rosanna
Year:
2021

Since 2017, when Canada became the first country to adopt a national AI strategy, at least 60 countries have adopted some form of policy for artificial intelligence (AI). The prospect of an estimated boost of 16 percent, or US$13 trillion, to global output by 2030 has led to an unprecedented race to promote AI uptake across industry, consumer markets, and government services. Global corporate investment in AI reportedly reached US$60 billion in 2020 and is projected to more than double by 2025.

At the same time, work on developing global standards for AI has advanced in various international bodies. It covers both the technical aspects of AI, in standards development organizations (SDOs) such as the International Organization for Standardization (ISO), the International Electrotechnical Commission (IEC), and the Institute of Electrical and Electronics Engineers (IEEE), and the ethical and policy dimensions of responsible AI. In addition, in 2018 the G-7 agreed to establish the Global Partnership on AI (GPAI), a multistakeholder initiative working on projects to explore regulatory issues and opportunities for AI development. The Organization for Economic Cooperation and Development (OECD) launched the AI Policy Observatory to support and inform AI policy development. Several other international organizations have become active in developing proposed frameworks for responsible AI development.

In addition, there has been a proliferation of declarations and frameworks from public and private organizations aimed at guiding the development of responsible AI. While many of these focus on general principles, the past two years have seen efforts to put principles into operation through full-fledged policy frameworks. Canada’s directive on the use of AI in government, Singapore’s Model AI Governance Framework, Japan’s Social Principles of Human-Centric AI, and the U.K. guidance on understanding AI ethics and safety have been frontrunners in this respect; they were followed by the U.S. guidance to federal agencies on regulation of AI and an executive order on how these agencies should use AI. Most recently, the EU’s proposed regulation on AI marks the first attempt to introduce a comprehensive legislative scheme governing AI.

In exploring how to align these various policymaking efforts, we focus on the most compelling reasons for stepping up international cooperation (the “why”); the issues and policy domains that appear most ready for enhanced collaboration (the “what”); and the instruments and forums that could be leveraged to achieve meaningful results in advancing international AI standards, regulatory cooperation, and joint R&D projects to tackle global challenges (the “how”). At the end of this report, we list the topics that we propose to explore in our forthcoming group discussions.
Why international cooperation on AI is important

Even more than many other domains of science and engineering in the 21st century, AI development is deeply collaborative across borders, especially when it comes to research, innovation, and standardization. There are several reasons to sustain and enhance this international cooperation.

AI research and development is an increasingly complex and resource-intensive endeavor, in which scale is an important advantage. Several essential inputs to AI development benefit from scale, including access to high-quality data (especially for supervised machine learning), large-scale computing capacity, knowledge, and talent. Cooperation among governments and among AI researchers and developers across national boundaries can maximize this advantage of scale and exploit comparative advantages for mutual benefit. An absence of international cooperation would lead to competitive and duplicative investments in AI capacity, creating unnecessary costs and leaving each government worse off in AI outcomes.
International cooperation based on commonly agreed democratic principles can help keep AI development responsible and build trust. While much progress has been made in aligning on principles for responsible AI, differences remain, even among participants in the Forum for Cooperation on AI (FCAI). The next steps in AI governance involve translating AI principles into policy, regulatory frameworks, and standards. These will require a deeper understanding of how AI works in practice and working through the operation of principles in specific contexts and in the face of inevitable tradeoffs, such as those that arise when seeking AI that is both accurate and explainable. Effective cooperation will require concrete steps in specific areas, which the recommendations of this report aim to suggest.
When it comes to regulation, divergent approaches can create barriers to innovation and diffusion. Governments’ efforts to boost domestic AI development around concepts of digital sovereignty can have negative spillovers, such as restrictions on access to data, data localization mandates, discriminatory investment rules, and other requirements. Likewise, diverging risk classification regimes and regulatory requirements can increase costs for businesses seeking to serve the global AI market. Varying governmental AI regulations may necessitate building variations of AI models, increasing the work needed to build an AI system and leading to higher compliance costs that disproportionately affect smaller firms. Differing regulations may also force variation in how data sets are collected and stored, creating additional complexity in data systems and reducing the downstream usefulness of the data for AI. These additional costs may apply to AI offered as a service as well as to hardware-software systems that embed AI, such as autonomous vehicles, robots, or digital medical devices. Enhanced cooperation is key to creating a larger market in which different countries can leverage their own comparative advantages. For example, the EU seeks a competitive advantage in “industrial AI”; with more aligned regulation, EU enterprises could exploit that advantage without the prospect of substantial reengineering to meet the requirements of another jurisdiction.
Aligning key aspects of AI regulation can enable specialized AI development firms to thrive. Such companies generate business by developing expertise in a specialized AI system and then licensing it to other companies as one part of a broader tool. As AI becomes more ubiquitous, complex stacks of specialized AI systems may emerge in many sectors. A more open global market would allow a company to take advantage of digital supply chains, offering a single product that combines a natural language model built in Canada, a video analysis algorithm trained in Japan, and network analysis developed in France. Enabling global competition by such specialized firms will encourage healthier markets and more AI innovation.
Enhanced cooperation on trade is essential to avoid unjustified restrictions on the flow of goods and data, which would substantially reduce the prospective benefits of AI diffusion. While the strategic importance of data and sovereignty has given rise in many countries to legitimate industrial policy initiatives aimed at mapping and reducing dependencies on the rest of the world, protectionist measures can jeopardize global cooperation, impinge on global value chains, and narrow consumer choice, thereby reducing market size and overall incentives to invest in meaningful AI solutions.
Enhanced cooperation is needed to tap the potential of AI solutions to address global challenges. No country can “go it alone” in AI, especially when it comes to sharing data and applying AI to tackle global challenges like climate change or pandemic preparedness. The governments involved in the FCAI share an interest in deploying AI for global social, humanitarian, and environmental benefit. For example, the EU proposes to employ AI in support of its Green Deal, and the G-7 and GPAI have called for harnessing AI to advance the U.N. Sustainable Development Goals. Collaborative “moonshots” can pool resources to leverage the potential of AI and related technologies against key global problems in domains such as health care, climate science, or agriculture, while also providing a way to test approaches to responsible AI together.
Cooperation among likeminded countries is important to reaffirm key principles of openness and the protection of democracy, freedom of expression, and other human rights. The unconstrained use of AI by techno-authoritarian regimes such as China’s exposes citizens to potential human rights violations and threatens to split cyberspace into incompatible technology stacks and to fragment the global AI R&D process.

The fact that international cooperation is an element of most governments’ AI strategies indicates that governments appreciate the connection between AI development and collaboration across borders. This report is about concrete ways to realize this connection.