The diffusion of misinformation has garnered considerable attention in society. Because algorithms are considered one of the major drivers behind the spread and amplification of misinformation, it is useful to understand how these algorithms affect misinformation sharing and the manner in which it spreads. This chapter examines the psychological, cognitive, and social factors involved in how people process misinformation delivered through algorithms and artificial intelligence. Modeling cognitive processes has long been of interest for understanding user reasoning, and many theories from different fields have been formalized into cognitive models. Drawing on information processing theory and the concept of diagnosticity, the chapter examines how perceived normative values influence a user’s perceived diagnosticity and likelihood of sharing information, and whether explainability further moderates this relationship. The findings showed that users who heuristically processed normative values and held positive diagnostic perceptions were more likely to proactively discern misinformation. Users with a high cognitive ability to understand information were more likely to discern it correctly and less likely to share misinformation online. When users are exposed to misinformation through algorithmic recommendations, their perceived diagnosticity of that misinformation can be predicted accurately from their understanding of normative values. This perceived diagnosticity, in turn, positively influences their judgments of the misinformation’s accuracy and credibility. With this focus on misinformation processing, the chapter offers theoretical insights and practical recommendations to help firms become more resilient against the detrimental impact of misinformation.
