Concerns have been raised over AI-generated deepfakes and their impact on democracy. Unlike earlier forms of disinformation that relied on text or traditional video-editing techniques (cheapfakes), deepfakes employ artificial intelligence, prompting speculation that they may be even more persuasive and harder to debunk. Using an experiment with a multiple-message design (N = 2,085), we found that fake videos suggesting a sex, corruption, or prejudice scandal (but not text-only fakes) inflicted substantial reputational damage on an innocent politician, regardless of whether the underlying technique was "cheap" or "deep." This damage was visible in altered attitudes, emotions, and voting intentions. However, exposure to a journalistic fact-check substantially reduced, and in some cases eliminated, these detrimental effects. These findings have important implications for our theoretical understanding of the effects of deepfakes and of strategies for mitigating them. While clearly highlighting the significant persuasive potential of deepfakes (and visual disinformation in general), the present study paints a more nuanced picture than was previously possible.