Algorithms increasingly mediate critical aspects of daily life across healthcare, hiring, and social media, shaping user experiences through automated decision-making. Yet algorithmic bias, the systematic disadvantaging of certain groups by automated systems, has been widely documented across these domains. This study therefore addresses a gap in understanding how users respond to four epistemic categories of algorithmic bias, defined by crossing two dimensions: whether bias actually exists and whether users perceive it. We apply the information systems concept of workarounds to characterize potential user responses to each category. We then apply the Human–AI Interaction Theory of Interactive Media Effects to explain how users may detect bias through cue routes and develop workaround strategies through action routes. Our theoretical framework proposes how users’ detection and workarounds vary across the four categories. Understanding these adaptive strategies offers crucial insights for developing inclusive technologies and fostering algorithmic literacy, ultimately informing the ongoing negotiation between human agency and technological constraint in digital societies.
