The Federal Trade Commission has indicated that it intends to regulate discriminatory AI products and services. This is a welcome development, but its true significance has not yet been appreciated. This Article argues that the FTC's flexible authority to regulate "unfair or deceptive acts or practices" offers several distinct advantages over traditional discrimination law when applied to AI. The Commission can reach a wider range of commercial domains, a larger set of possible actors, a more diverse set of harms, and a broader set of business practices than discrimination law currently covers or recognizes. For example, while most discrimination laws can address neither vendors that sell discriminatory software to decision-makers nor consumer products that work less well for certain demographic groups than for others, the Commission could address both. The Commission's investigative and enforcement powers can also overcome many of the practical and legal challenges that have limited plaintiffs' ability to successfully seek remedies under discrimination law. The Article demonstrates that the FTC has existing authority to address the harms of discriminatory AI and offers a method for the Commission to tackle the problem, modeled on its existing approach to data security.