
AI’s Hippocratic Oath

Author: Sharma, Chinmayi
Publication: Washington University Law Review
Year: 2025

Diagnosing diseases, creating artwork, offering companionship, analyzing data, and securing our infrastructure—artificial intelligence (“AI”) does it all. But it does not always do it well. AI can be wrong, biased, and manipulative. It has convinced people to commit suicide, starve themselves, arrest innocent people, discriminate based on race, radicalize in support of terrorist causes, and spread misinformation. All without betraying how it functions or what went wrong.

A burgeoning body of scholarship enumerates AI harms and proposes solutions. This Article diverges from that scholarship to argue that the heart of the problem is not the technology but its creators: AI engineers who either do not know how to, or are told not to, build better systems. Today, AI engineers act at the behest of self-interested companies pursuing profit, not safe, socially beneficial products. On its best day, the government lacks the agility and expertise to solve the AI problem on its own. On its worst day, the government falls prey to industry’s siren song. Litigation does not fare much better; plaintiffs have had little success challenging technology companies in court.

This Article proposes another way: professionalizing AI engineering. Require AI engineers to obtain licenses to build commercial AI products, push them to collaborate on scientifically supported, domain-specific technical and ethical standards, and charge them with policing themselves. Professionalization’s formal institutions can minimize the risk of technical errors, while its power to transform an individual engineer’s desire to do good into a culture of social responsibility can minimize the risk of ethical errors. This Article’s proposal addresses AI harms at their inception, influencing the very engineering decisions that give rise to them. By wresting control over information and system design away from companies and handing it to AI engineers, professionalization engenders trustworthy AI by design. Beyond recommending the specific policy solution of professionalization, this Article seeks to shift the discourse on AI away from an emphasis on light-touch, ex post solutions that address already-created products and toward ex ante controls that precede AI development. We have used this playbook before in fields that demand a high level of expertise and in which a duty to the public welfare must trump business motivations. What if, like doctors, AI engineers also vowed to do no harm?