There is considerable irony in this situation. I study digital technology and have for years spoken loudly and with increasing concern about the potential harms posed to our society, our democracy and, indeed, our children. YouTube, in particular, has a bad record. While platforms such as Facebook and Twitter have taken modest, if insufficient, steps toward addressing some of the more egregious harms caused by their products, YouTube’s response has been far more complacent. For example, while Facebook, and later Twitter, banned false claims of election victory after the 2020 US presidential election, YouTube let these damaging claims go viral. And in 2019, when the Canadian government required platforms to create a public archive of political ads during the federal election, Google, which owns the YouTube platform, simply pulled out of the political advertising market.
YouTube’s record is even thornier when it comes to kids’ content. On YouTube Kids (an age-restricted app designed specifically for children), parents and journalists have found violence against child characters, age-inappropriate sexualization and even recorded suicide attempts embedded in kids’ videos. Many of these videos have hundreds of millions of views. In one recent controversy, parents discovered popular kids’ shows being re-uploaded with spliced-in clips of a man joking about how to cut yourself. “Remember, kids,” he says in the video, “sideways for attention, longways for results.”
I strongly believe that better regulation is the solution to many of the challenges posed by digital platforms. The free market has provided an inadequate defence against the wide range of harms enabled by our poorly regulated digital infrastructure. But how kids use digital technologies presents an additional set of challenges. Should we regulate the internet only for adults, or do kids need additional protections?