
Protecting society from AI misuse: when are restrictions on capabilities warranted?

Authors:
Anderljung, Markus; Hazell, Julian; von Knebel, Moritz
Publication:
AI & SOCIETY
Year:
2025

Artificial intelligence (AI) systems will increasingly be used to cause harm as they grow more capable. In fact, AI systems are already starting to help automate fraudulent activities, violate human rights, create harmful fake images, and identify dangerous toxins. To prevent some misuses of AI, we argue that targeted interventions on certain capabilities will be warranted. These restrictions may include controlling who can access certain types of AI models, what they can be used for, whether outputs are filtered or can be traced back to their users, and the resources needed to develop them. We also contend that new restrictions on non-AI capabilities needed to cause harm will be required. For example, concerns about AI-enabled bioweapon acquisition have motivated efforts to introduce DNA synthesis screening. Though capability restrictions risk reducing use more than misuse (resulting in an unfavorable Misuse–Use Tradeoff), we argue that interventions on capabilities are warranted in some circumstances: when other interventions are insufficient, the potential harm from misuse is high, and targeted ways to intervene on capabilities exist. We provide a taxonomy of interventions that can reduce AI misuse, focusing on the specific steps required for a misuse to cause harm (the Misuse Chain), and a framework to determine whether an intervention is warranted. We apply our framework to three examples: predicting novel toxins, creating harmful images, and automating spear phishing campaigns.