This study investigates how national AI literacy and its operating environment jointly shape the frequency of AI-related incidents and hazards across countries. Drawing on cross-national panel data from 62 countries between 2014 and 2024, we integrate AI incident reports from the OECD AI Incidents Monitor with indicators from the Tortoise Global AI Index. AI literacy is proxied by the AI Talent Index, which captures the human capital available for AI development and deployment, while the AI Operating Environment Index reflects the institutional and regulatory conditions supporting responsible AI use. Using a correlated random effects negative binomial model, we find that countries with higher AI literacy exhibit higher expected counts of reported AI-related incidents, consistent with greater exposure and greater reporting capture. However, this relationship is significantly attenuated in countries with more mature operating environments. The interaction between AI talent and governance indicates a complementary risk-mitigation effect: where safeguards are robust, higher AI literacy is associated with lower expected incident counts than in settings with comparable literacy but weak governance. These findings suggest that building human capital without adequate governance may increase societal risk, and that effective AI policy must align investments in talent with regulatory infrastructure. Our results underscore the need for integrated national strategies that promote both capability and accountability as AI advances rapidly.
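The estimation strategy can be illustrated with a minimal sketch. The snippet below fits a correlated random effects (Mundlak-style) negative binomial on synthetic data, including a talent × operating-environment interaction and country means of the time-varying regressors. All variable names (`talent`, `env`, `incidents`) and the data-generating values are hypothetical stand-ins for the paper's indices, not the authors' actual code or data.

```python
# Sketch: correlated random effects negative binomial with an interaction,
# on synthetic panel data (62 countries x 11 years). Illustrative only.
import numpy as np
import pandas as pd
import statsmodels.formula.api as smf

rng = np.random.default_rng(0)
n_countries, n_years = 62, 11
n = n_countries * n_years
df = pd.DataFrame({
    "country": np.repeat(np.arange(n_countries), n_years),
    "talent": rng.normal(size=n),   # stand-in for the AI Talent Index
    "env": rng.normal(size=n),      # stand-in for the Operating Environment Index
})

# Simulate overdispersed counts (NB2 via a gamma-Poisson mixture) where
# talent raises incidents but the effect weakens as governance strengthens.
alpha = 0.5
mu = np.exp(1.0 + 0.5 * df["talent"] - 0.3 * df["talent"] * df["env"])
df["incidents"] = rng.poisson(mu * rng.gamma(1 / alpha, alpha, size=n))

# Mundlak device: adding country means of the time-varying regressors lets
# the pooled estimator approximate a correlated random effects model.
df["talent_bar"] = df.groupby("country")["talent"].transform("mean")
df["env_bar"] = df.groupby("country")["env"].transform("mean")

model = smf.negativebinomial(
    "incidents ~ talent * env + talent_bar + env_bar", data=df
).fit(disp=0)
print(model.params[["talent", "talent:env"]])
```

A negative coefficient on `talent:env` would correspond to the attenuation the abstract describes: the incident-increasing effect of talent shrinks as the operating environment matures.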
