While policymakers around the world are increasingly proposing policies aimed at preventing harm from artificial intelligence (AI), their approaches differ significantly. This article therefore surveys the use cases targeted by AI laws, the laws' key requirements and enforcement activity in the European Union (EU), the United Kingdom (UK), China, Brazil, Russia and the United States (US), the jurisdictions that have introduced the greatest volume of AI laws. We additionally survey South Korea, one of the few countries to have passed comprehensive AI legislation, and Singapore, which is taking an agile approach to AI regulation. We find that horizontal laws, which apply across multiple use cases, generally take a risk-based approach that imposes the most stringent obligations on the highest-risk systems, although which systems are classified as high-risk varies across jurisdictions. Vertical, sector-specific laws, by contrast, typically target human resources (HR) technologies, autonomous vehicles and generative AI. Key requirements most often relate to transparency and notification, accountability and fairness, while enforcement activity typically concerns violations involving data protection, intellectual property (IP), discrimination and deceptive practices. We conclude by discussing the role of international efforts in promoting AI safety and by offering recommendations to that end.
