The growing prominence of deepfakes over the last several years has intensified an ongoing debate about authenticity online and the distinction between fact and fiction. Deepfakes are highly realistic synthetic media, generated with deep learning, that can be abused to threaten an organization’s brand; to impersonate leaders and financial officers; and to enable unauthorized access to networks, communications, and sensitive information. Their proliferation foreshadows a dubious, uncertain era defined by a fractured geopolitical landscape, ideological echo chambers, and mutual distrust. AI-based systems can amplify disinformation rather than dispel it. The future online environment should reflect how a healthy society naturally operates rather than being driven by algorithms that manipulate our attention to boost corporate profits. Although social media embodies the legitimate ideal of democratizing information, that endeavor has been hijacked and subverted by algorithmic amplification and the ad-driven business model. To fulfill that normative aspiration, AI systems should ensure transparency, provide fair results, establish accountability, and operate under a clearly defined data governance policy.
