This study explores how predictive artificial intelligence (AI) becomes institutionalized in public administration, taking as its case a wildfire risk-prediction system implemented by a Korean forestry agency. Moving beyond technical performance, the study conceptualizes predictive AI as a dynamic policy instrument that reshapes administrative routines, authority, and accountability. Drawing on Scott’s institutional theory, Salamon’s policy-instrument framework, and Ziewitz’s algorithmic governance model, it employs a mixed-methods design combining surveys and semi-structured interviews with frontline officials. The findings yield three insights. First, predictive AI influences decision-making by offering real-time quantitative justification, especially in field-level operations. Second, successful institutionalization depends on the alignment of regulatory (e.g., SOPs), normative (e.g., leadership support), and cognitive (e.g., trust) pillars. Third, organizational context—such as agency size and leadership orientation—mediates how these pillars interact. Despite the system’s high technical accuracy, institutionalization remains uneven owing to unclear accountability and weak integration mechanisms, revealing a structural tension between AI-driven efficiency and traditional accountability norms. Theoretically, the study bridges policy-instrument theory and algorithmic governance within an institutional framework; practically, it proposes design principles for responsible AI implementation in government. While focused on wildfire management, the framework is transferable to other predictive technologies in public administration.
