As artificial intelligence continues to find its way into our daily lives, its potential to interfere with human rights grows more severe. With this in mind, and recognizing that the technology is still in its infancy, Access Now has conducted this preliminary study to scope the range of human rights issues that artificial intelligence may raise today or in the near future.
Many of the issues that arise in examinations of this area are not new, but they are greatly exacerbated by the scale, proliferation, and real-life impact that artificial intelligence facilitates. As a result, the potential of artificial intelligence to both help and harm people is far greater than that of the technologies that came before it. While we have already seen some of these consequences, the impacts will only grow in severity and scope. However, by examining now what safeguards and structures are necessary to address problems and abuses, we may be able to prevent or mitigate the worst harms, including those that disproportionately affect marginalized people.
There are several lenses through which experts examine artificial intelligence. Applying international human rights law, with its well-developed standards and institutions, to artificial intelligence systems can contribute to the conversations already under way and provide a universal vocabulary, along with established forums for addressing power differentials.
Additionally, human rights law provides a framework for solutions, which we offer here in the form of recommendations. Our recommendations fall into four general categories: data protection rules to protect rights in the data sets used to develop and feed artificial intelligence systems; special safeguards for government uses of artificial intelligence; safeguards for private sector uses of artificial intelligence systems; and investment in further research to continue examining the future of artificial intelligence and its potential interferences with human rights.
Our hope is that this report provides a jumping-off point for further conversations and research in this developing space. We do not yet know what artificial intelligence will mean for the future of society, but we can act now to build the tools we need to protect people from its most dangerous applications. We look forward to continuing to explore the issues raised in this report, including through work with our partners and with key corporate and government institutions.