Later this year, the US Supreme Court will rule on a case with profound implications for society and technology. The case centres on Nohemi Gonzalez, a US student who was shot and killed in Paris in 2015 by members of the Islamist terrorist group ISIS. Her father claims in the suit that Google, which owns the video platform YouTube, should be held responsible because the platform’s recommender algorithms promoted content that radicalized the terrorists who killed Gonzalez. The final ruling, expected by the end of June, could make digital platforms liable for their algorithms’ recommendations.
Whatever decision the court makes, the case highlights an urgent question: how can societies govern adaptive algorithms that continually change in response to people’s behaviour? YouTube’s algorithms, which recommend videos on the basis of the actions of billions of users, could have shown viewers terrorist videos owing to a combination of those viewers’ past behaviour, overlapping viewing patterns and popularity trends. Years of peer-reviewed research shows that algorithms used by YouTube and other platforms have recommended problematic content to users even when they never sought it out1. Technologists have struggled to prevent this.
Humans and algorithms work together — so study them together