Misinformation disrupts our information ecosystem, harming individuals and straining social cohesion and democracy. Understanding what causes online (mis)information to (re)appear is crucial for fortifying our information ecosystem. We analyzed a large-scale Twitter (now “X”) dataset of about 2 million tweets across 123 fact-checked stories. Previous research suggested a falsehood effect (false information reappears more frequently) and an ambiguity effect (ambiguous information reappears more frequently), but robust evidence for either effect has remained elusive. Using polynomial statistical modeling, we compared a falsehood model, an ambiguity model, and a dual-effect model. The data supported the dual-effect model (13.76 times as likely as a null model), indicating that both ambiguity and falsehood promote information reappearance. However, the evidence for ambiguity was stronger: the ambiguity model was 6.6 times as likely as the falsehood model. Various control checks affirmed the ambiguity effect, whereas the falsehood effect was less stable. Nonetheless, the best-fitting model explained less than 7% of the variance, indicating that (i) the dynamics of online (mis)information are complex and (ii) falsehood effects may play a smaller role than previous research has suggested. These findings underscore the importance of understanding the dynamics of online (mis)information, though our focus on fact-checked stories may limit generalizability to the full spectrum of information shared online. Even so, our results can inform policymakers, journalists, social media platforms, and the public in building a more resilient information environment, while opening new avenues for research into source credibility, cross-platform applicability, and psychological factors.
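To make the model comparison concrete, the following is a minimal numpy sketch of the *kind* of analysis described: fitting a null, a falsehood (linear), an ambiguity (quadratic), and a dual-effect polynomial model to a reappearance measure, then comparing them via BIC-approximated Bayes factors. All data here are simulated and all variable names are illustrative; this is not the study's actual pipeline or estimation procedure.

```python
import numpy as np

rng = np.random.default_rng(0)
n = 123  # one illustrative observation per fact-checked story

# Simulated predictors: veracity on [0, 1]; "ambiguity" modeled as the
# quadratic term (veracity - 0.5)^2, which peaks at the scale endpoints,
# so a negative coefficient yields the inverted-U (mid-scale) ambiguity effect.
veracity = rng.uniform(0, 1, n)
quad = (veracity - 0.5) ** 2

# Simulated outcome with both a (weak) falsehood effect and a (stronger)
# ambiguity effect, plus noise. Coefficients are arbitrary.
reappearance = 0.5 - 0.2 * veracity - 2.0 * quad + rng.normal(0, 0.2, n)

def fit_bic(X):
    """OLS fit of reappearance on design matrix X; return the Gaussian BIC
    (up to an additive constant shared by all models)."""
    beta, *_ = np.linalg.lstsq(X, reappearance, rcond=None)
    rss = np.sum((reappearance - X @ beta) ** 2)
    k = X.shape[1]
    return n * np.log(rss / n) + k * np.log(n)

ones = np.ones((n, 1))
bic_null = fit_bic(ones)                                   # intercept only
bic_falsehood = fit_bic(np.column_stack([ones, veracity])) # linear term
bic_ambiguity = fit_bic(np.column_stack([ones, quad]))     # quadratic term
bic_dual = fit_bic(np.column_stack([ones, veracity, quad]))

def bayes_factor(bic_a, bic_b):
    """Approximate Bayes factor of model A over model B from BIC values."""
    return float(np.exp((bic_b - bic_a) / 2))

print("BF dual vs. null:", bayes_factor(bic_dual, bic_null))
print("BF ambiguity vs. falsehood:", bayes_factor(bic_ambiguity, bic_falsehood))
```

With the simulated coefficients above, the dual-effect model is favored over the null and the ambiguity model over the falsehood model, mirroring the qualitative pattern reported in the abstract (the actual values 13.76 and 6.6 come from the real data, not this simulation).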