With the rise of generative AI, information retrieval systems are evolving, shifting from search engines that retrieve and suggest links to content towards platforms that generate answers using AI. This shift raises new challenges for assigning responsibility for the information these systems deliver to the public. This paper examines the legal and ethical questions of accountability that arise in the automation of information retrieval, using disputes over Google’s Autocomplete feature as an early and instructive example of automated prediction in search. We review three defamation cases brought against Google in Australia, Hong Kong, and Germany, and explore how courts have grappled with questions of responsibility for harm when algorithmic systems are implicated in the production and dissemination of information. We argue that these cases offer valuable insights for the governance of contemporary AI-driven information retrieval systems, particularly those built on large language models (LLMs). We consider responsibility for harm prevention at both the individual and organisational levels, and assess the epistemic responsibility of Google as a provider of socio-technical infrastructures that shape public knowledge.
