Deepfakes are no longer a curiosity—they pose a real threat to financial stability. They can impersonate CEOs, fabricate corporate events, or simulate geopolitical crises, all with the power to trigger market turmoil in minutes. Yet while European Union (EU) law has pioneered the regulation of financial markets and digital technologies, existing frameworks—such as the Markets in Financial Instruments Directive (MiFID II), the Markets in Financial Instruments Regulation (MiFIR), the Market Abuse Regulation (MAR), the Artificial Intelligence (AI) Act, the Digital Services Act (DSA), and the General Data Protection Regulation (GDPR)—remain fragmented, reactive, and ill-suited to tackling deepfake-driven manipulation. This article models three scenarios of deepfake use in capital markets, exposing structural gaps in detection, attribution, and enforcement. It argues that EU capital markets regulation is conceptually unprepared for high-velocity, non-textual disinformation. The article calls for legal and financial reforms, including explicit recognition of synthetic media in market manipulation law, real-time supervisory coordination, and the integration of AI-based monitoring tools. By situating deepfakes within the macroprudential debate, the article contributes to a timely conversation on safeguarding financial stability in the digital era.
