The rapid advancement of generative artificial intelligence (AI) has introduced both opportunities and challenges in the fight against misinformation. This scoping review synthesizes recent empirical studies to examine the dual role of generative AI, particularly large language models (LLMs), in the generation, detection, and mitigation of misinformation, as well as its downstream impact. Analyzing 24 empirical studies, our review suggests that LLMs can generate highly convincing misinformation, often exploiting audiences' cognitive biases and ideological leanings, while also demonstrating the ability to detect false claims and enhance users' resistance to misinformation. Mitigation efforts show mixed results: personalized corrections prove effective, but safeguards are inconsistently applied. Additionally, exposure to AI-generated misinformation was found to reduce trust and influence decision-making. This review underscores the need for standardized evaluation metrics, interdisciplinary collaboration, and stronger regulatory measures to ensure the responsible use of generative AI in the information ecosystem.
