Existing research on AI hallucinations has largely framed them as technical flaws undermining user experience, paying little attention to their role as emergent risk signals in sociocognitive processes. To address this gap, this study applies the Social Amplification of Risk Framework to examine how hallucination-driven risk perception relates to perceptions of media narratives and government performance. Using survey data from 811 generative-AI users in mainland China, we test a dual-path mediation model linking perceived hallucination, perceptions of media hype and governance ineffectiveness, and subsequent risk perception and information behaviors. Results show an asymmetric pattern: perceived hallucination is linked to elevated risk perception through the media hype pathway, which in turn is associated with higher information-sharing intention, whereas the governance pathway shows no direct association unless moderated by perceived behavioral control. Neither pathway is related to enhanced verification, revealing a decoupling of risk perception from epistemic vigilance. Implications for theorizing risk processing in opaque AI environments and for designing responsive governance strategies are discussed.
