Social media takes advantage of people’s predisposition to attend to threatening stimuli by algorithmically promoting content that captures attention. However, this content is often not what people explicitly say they want to see. We argue that social media companies should weigh users’ expressed preferences more heavily in their ranking algorithms, and we propose modest changes to user interfaces that could reduce the prevalence of threatening content in the online environment.
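To make the re-weighting idea concrete, the sketch below shows one way a feed-ranking score could blend predicted engagement with a user's expressed preferences; the signal names, the linear blend, and the weight values are illustrative assumptions, not a description of any platform's actual ranking system.

```python
# Hypothetical illustration: blending predicted engagement with expressed
# preferences when scoring a post for a user's feed. The signal names and
# weighting scheme are assumptions, not any platform's actual algorithm.

def rank_score(predicted_engagement: float,
               expressed_preference: float,
               preference_weight: float = 0.7) -> float:
    """Return a feed-ranking score in [0, 1].

    predicted_engagement: model estimate of clicks or dwell time (0-1),
        a signal that tends to favor threatening, attention-capturing content.
    expressed_preference: how well the post matches what the user has
        explicitly said they want to see (0-1).
    preference_weight: weight on expressed preferences; raising it reflects
        the proposal to weigh stated preferences more heavily.
    """
    return (preference_weight * expressed_preference
            + (1.0 - preference_weight) * predicted_engagement)


# Example: a threatening post that hooks attention but that the user has said
# they do not want ranks below content the user has asked for.
print(rank_score(predicted_engagement=0.9, expressed_preference=0.1))  # 0.34
print(rank_score(predicted_engagement=0.4, expressed_preference=0.9))  # 0.75
```

Under an engagement-only ranking (preference_weight = 0) the first post would dominate the feed; shifting weight toward expressed preferences reverses that ordering without removing any content outright.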