How can social media platforms effectively fight the spread of fake news and other forms of misinformation online? One possibility is to use newsfeed algorithms to systematically downrank content from low-quality sources. Although this proposal is algorithmic on its face, the challenge lies not in the details of the algorithm but in how to identify the quality of news sources – that is, the real challenge for this approach is a social science challenge. Here we assess the potential utility of crowdsourcing for addressing this challenge. Are laypeople good judges of news source quality? Or are they uninformed, or motivated to “game” the crowdsourcing mechanism in order to advance their partisan agenda? To shed light on these questions, we conducted a survey experiment with a sample of Americans that is nationally representative on key demographics, in which participants indicated their trust in a range of mainstream, hyper-partisan, and fake news sites. To study the tendency of people to game the system, half of the participants were told that their responses would be shared with social media companies to help inform ranking algorithms. Consistent with prior results on crowdsourcing, we find that our sample of laypeople were quite successful at discriminating between high- and low-quality content: they provided much higher trust ratings to mainstream sources than to hyper-partisan or fake news sources, and their ratings were highly correlated with the ratings of professional fact-checkers. Critically, we show that this successful discernment was unaffected by informing participants that their responses would influence ranking algorithms, even though this knowledge manipulation increased the polarization of trust ratings. Our results have important implications for the deployment of decentralized, scalable approaches to fighting misinformation online.
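
To make the downranking idea concrete, the sketch below shows one way crowdsourced source-trust scores could feed a ranking step. It is an illustrative assumption only: the example domains, trust values, engagement signal, and the multiplicative weighting are hypothetical and are not the platforms' actual algorithms or the method studied here.

```python
# Minimal sketch: downrank feed items by crowdsourced source-trust scores.
# All scores, domains, and the weighting scheme are hypothetical.
from dataclasses import dataclass


@dataclass
class FeedItem:
    headline: str
    source: str
    engagement: float  # baseline ranking signal, e.g. predicted clicks


# Hypothetical mean trust ratings (0-1) aggregated from layperson surveys.
CROWD_TRUST = {
    "mainstream-example.com": 0.85,
    "hyperpartisan-example.com": 0.35,
    "fakenews-example.com": 0.10,
}


def downrank(items, trust=CROWD_TRUST, default_trust=0.5):
    """Order items by engagement weighted by the source's crowd trust score."""
    return sorted(
        items,
        key=lambda it: it.engagement * trust.get(it.source, default_trust),
        reverse=True,
    )


feed = [
    FeedItem("Sensational claim!", "fakenews-example.com", engagement=0.9),
    FeedItem("Measured report", "mainstream-example.com", engagement=0.6),
]
for item in downrank(feed):
    print(item.source, "->", item.headline)
```

Under this weighting, the mainstream item outranks the higher-engagement fake news item because its crowd trust score more than offsets the engagement gap; the open question addressed in the study is whether layperson ratings are reliable enough to serve as that trust signal.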