We formalize predictive optimization, a category of decision-making algorithms that use machine learning (ML) to predict future outcomes of interest about individuals. For example, pre-trial risk prediction algorithms such as COMPAS use ML to predict whether an individual will re-offend in the future. Our thesis is that predictive optimization raises a distinctive and serious set of normative concerns that render it presumptively illegitimate. To test this, we review 387 reports, articles, and web pages from academia, industry, non-profits, governments, and modeling contests, and find many real-world examples of predictive optimization. We select eight particularly consequential examples as case studies. In parallel, we develop a set of normative and technical critiques that challenge the claims made by the developers of these applications—in particular, claims of increased accuracy, efficiency, and fairness. Our key finding is that these critiques apply to each of the applications, are not easily evaded by redesigning the systems, and thus challenge the legitimacy of their deployment. We argue that the burden of evidence for showing that a predictive optimization system is not harmful should rest with the developers of the tools. Based on our analysis, we provide a rubric of critical questions that can be used to deliberate about or contest the legitimacy of specific predictive optimization applications.