
Misinformation and Algorithmic Bias

Author: Shin, Donghee
Year: 2024

What happens if the data fed to AI are biased? What happens if a chatbot's responses spread misinformation? Contrary to what many hope, AI is as biased as the humans behind it. Bias can originate from many sources, including, but not limited to, the design of an algorithm, its unintended or unanticipated uses, and decisions about how data are coded, framed, filtered, or analyzed to train machine learning models. Algorithmic bias has been widely observed in advertising, content recommendation, and search engine results. Algorithmic prejudice has been documented in cases ranging from political campaign outcomes to the proliferation of fake news and misinformation. It has also surfaced in health care, education, and public services, aggravating existing societal, socioeconomic, and political biases. These algorithm-induced biases can harm a wide range of social interactions, from unintended privacy infringements to the entrenchment of societal biases around gender, race, ethnicity, and culture. The significance of the data used to train algorithms should not be underestimated. Humans should play a part in the datafication of algorithms, because technology alone cannot prevent the spread of misinformation, especially given the rate at which information spreads online.
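
To make the point about training data concrete, the short Python sketch below (an illustration, not taken from the cited work) uses synthetic data and a hypothetical two-group split: the labels for "group B" are coded far more noisily than those for "group A," a stand-in for biased coding, filtering, or labeling decisions made upstream of model training. The names make_data, flip_rate, and the groups themselves are assumptions for the example, but the outcome, a classifier that performs noticeably worse for the group with the poorer data, reflects how such decisions propagate into algorithmic outputs.

# A minimal, illustrative sketch: biased labeling of one group's training data
# carries through to unequal model performance.
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(0)

def make_data(n, flip_rate):
    # One feature, a label tied to its sign, and a share of labels flipped at random
    # to mimic noisy or biased coding decisions.
    x = rng.normal(size=(n, 1))
    y = (x[:, 0] > 0).astype(int)
    flips = rng.random(n) < flip_rate
    y[flips] = 1 - y[flips]
    return x, y

# Group B is both smaller and labeled much more noisily than group A.
xa, ya = make_data(2000, flip_rate=0.05)
xb, yb = make_data(200, flip_rate=0.30)

# Train a single model on the pooled data, as is typical in practice.
model = LogisticRegression().fit(np.vstack([xa, xb]), np.concatenate([ya, yb]))

# Accuracy is systematically lower for the group whose data were coded poorly.
for name, (x, y) in {"group A": (xa, ya), "group B": (xb, yb)}.items():
    print(f"accuracy on {name}: {model.score(x, y):.2f}")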