Ethiopia’s artificial intelligence (AI) policy represents a significant step forward in national digital governance; our study has, however, identified gaps in areas such as linguistic justice and AI safeguards. We investigate the performance of large language models in detecting hate speech in Ethiopian languages—Amharic, Afaan Oromo, and Tigrigna—and their susceptibility to being manipulated into producing it. We find that large language models are less effective at detecting hate speech in non-English contexts and can be easily manipulated to generate hate speech, raising serious online safety concerns. Based on a careful analysis of the policy, we propose ASPIRE, a series of recommendations for updating it to address these concerns: adapting policy to the digital sphere, strengthening linguistic inclusivity, preventing AI misuse, improving infrastructure, resourcing media literacy and training, and emphasising overlaps with hate speech governance. Failure to recognise online harms as integral to AI development leaves a policy vacuum that could undermine long-term development goals.
