I argue that while social media platforms may recognize the importance of assessing users along different identity dimensions, they do not acknowledge the reality of intersectionality, nor the ways in which their use of AI to target users and recommend content can propagate multiple, compounding forms of oppression. Thus, their AI-driven solutions and interventions may in fact exacerbate the problem. This chapter examines that problem and offers recommendations for how social media companies can incorporate an intersectional lens into their automated content moderation systems, and for how policymakers can support that approach.