The proliferation of misinformation poses significant challenges in contemporary society, necessitating efficient strategies for its identification and mitigation. Automated fact-checking systems might prove effective, but they face challenges, particularly in charged contexts where prior beliefs are likely to influence responses to fact-checks. Data from two studies in which participants were given a piece of gun-control misinformation and a correction from an automated fact-checker (N = 1,372) illustrate the nuanced interplay among prior beliefs, trust in artificial intelligence (AI), and the perceived accuracy of fact-checking systems in shaping (a) post-correction misinformation endorsement and (b) post-correction perceptions of system quality. Study 1 examined default perceptions of system accuracy and demonstrated a high degree of variability in those perceptions; when fact-checked by such a system, people’s prior beliefs predicted both continued belief in the misinformation after the correction and post-correction perceptions of the fact-checking system. Study 2 directly manipulated the purported accuracy of the system. When the automated fact-checker was said to have an accuracy level close to current expectations of existing AI systems (67%), people continued to believe the misinformation more to the extent that it was consistent with their prior beliefs. This pattern was attenuated when participants were told that the fact-checker was highly accurate (97%). Similarly, prior beliefs related more strongly to post-correction perceptions of system reliability when accuracy information was provided, and especially when the system was described as not highly accurate (67%). This research demonstrates biases in reactions to automated fact-checkers and highlights the importance of accounting for individual beliefs and perceived system characteristics in designing scalable interventions.
