This study proposed and tested a unified framework encompassing four explanations for public opinion on artificial intelligence (AI): trust, coping with perceived AI threat, media use, and knowledge. We conducted a quota survey of U.S. adults (N = 981) and analyzed the data using both linear regression and conditional random forest models. Together, the four explanations accounted for a substantial proportion of the variance in public opinion of AI. Trust emerged as the strongest predictor, followed by coping mechanisms, media use, and knowledge. Greater trust in AI corporations and in science was associated with more positive opinions of AI. The negative association between perceived AI threat and public opinion was attenuated by higher levels of science efficacy. Greater social media use and liberal mass media use were each associated with more positive opinions of AI. In contrast, higher science literacy predicted more negative views of AI. By empirically testing this unified framework, this study shed light on factors that may be leveraged to foster public support for socially beneficial AI applications.
