Algorithmic decision-making systems don’t exist in a vacuum. To meaningfully assess their harms, we have to examine them in the organizational and political contexts in which they are deployed. In my view, the debate about whether we can make algorithms fair misses the mark in two ways. The lens of discrimination is inadequate to understand the plethora of harms from algorithmic decision making. And the statistical properties of algorithms tend to have, at best, a nebulous relationship with real-world outcomes for people. I advocate for a more ambitious study of fairness and justice in algorithmic decision making, one in which we attempt to model the sociotechnical system, not just the technical subsystem. The animating question becomes: “How should we design algorithmic bureaucracies?” This will require many shifts, including letting go of neat, mathematically precise fairness definitions and embracing empirical social-scientific methods. But the potential payoff is enormous: a greater ability to model benefits and harms, and a much expanded design space for reform.
