This article explores how artificial intelligence is shaping public health communication in African contexts, with a particular focus on cultural representation and epistemic justice. Concentrating on maternal health and vaccine hesitancy, the study compares AI-generated health messages with traditional campaign materials from Nigeria and Kenya. It draws on a comparative narrative analysis of 120 health messages, comprising 80 from ministries of health and local organizations and 40 generated by two AI platforms: SARAH.AI, developed by the World Health Organization, and ChatGPT, a widely used large language model. The analysis examines how these messages reflect or exclude local knowledge systems, gendered perspectives, and culturally grounded narrative structures. The findings reveal that while AI-generated messages incorporated local metaphors and references more frequently than expected, they often lacked depth, contained language errors, and struggled with contextual nuance. Traditional materials, although more accurate, often reinforced biomedical authority and offered limited integration of community knowledge. The study also highlights differences between the two AI systems: SARAH.AI tended to produce medically precise but overly templated content, while ChatGPT generated more dynamic narratives that were occasionally culturally and visually misaligned. These findings raise broader concerns about how AI tools encode authority and either challenge or reproduce existing inequities in global health narratives. The results also underscore the need for participatory, culturally responsive design in AI for health communication, highlighting the epistemic risks of deploying tools trained on Western-dominant data in Global South contexts and the value of interdisciplinary approaches spanning global health, data ethics, and communication studies.
