What can be done to combat political misinformation? One widely employed intervention involves attaching warnings to news stories that have been disputed by third-party fact-checkers. Here we demonstrate a hitherto unappreciated consequence of such warnings: an “implied truth” effect whereby false stories that fail to get tagged are considered validated, and thus are seen as more accurate. Such an effect is particularly important given that it is much easier to produce misinformation than to debunk it. We first introduce a formal model showing that an implied truth effect is a necessary consequence of Bayesian belief updating. In Study 1 (N = 5,271 MTurkers), we find that while warnings do lead to a modest reduction in the perceived accuracy of false headlines relative to a control condition (particularly for politically concordant headlines), we also observe the hypothesized implied truth effect: the presence of warnings caused untagged false headlines to be seen as more accurate than in the control. In Study 2 (N = 1,568 MTurkers), we find the same effects in the context of decisions about which headlines to consider sharing on social media. We also find that attaching verifications to some true headlines, which removes the ambiguity about whether untagged headlines were simply unchecked or had been verified, eliminates, and in fact slightly reverses, the implied truth effect. Together, these results challenge theories of motivated reasoning while identifying a new challenge for the policy of using warning tags to fight misinformation.

Note: A previous version of this working paper was titled “Assessing the effect of ‘disputed’ warnings and source salience on perceptions of fake news accuracy.” To allow for a more detailed treatment of both issues, the source salience aspect of the previous manuscript (former Study 2) has been removed from this updated version and will be re-posted as part of a separate paper investigating source effects.
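For intuition, a minimal sketch of the Bayesian argument (the notation and the single coverage parameter c are our illustrative assumptions, not the paper's formal model): let T be the event that a headline is true, and suppose the reader believes fact-checkers tag a fraction c of false headlines while never tagging true ones. By Bayes' rule, the perceived accuracy of an untagged headline is

\[
  P(T \mid \text{no tag})
  = \frac{P(\text{no tag} \mid T)\, P(T)}{P(\text{no tag} \mid T)\, P(T) + P(\text{no tag} \mid \neg T)\, P(\neg T)}
  = \frac{P(T)}{P(T) + (1 - c)\bigl(1 - P(T)\bigr)},
\]

which exceeds the prior P(T) whenever c > 0: under these assumptions, the mere absence of a warning is itself evidence of truth, yielding an implied truth effect.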