Fact-checks and corrections of falsehoods have emerged as effective ways to counter misinformation online. But in contexts dominated by encrypted messaging applications (EMAs), corrections must necessarily come from peers. Are such social corrections effective? If so, how substantiated do corrective messages need to be? To answer these questions, we evaluate the effect of different types of social corrections on the persistence of misinformation in India (N≈5,100). Using an online experiment, we show that social corrections substantially reduce belief in misinformation, including beliefs deeply anchored in salient group identities. Importantly, these positive effects are not systematically attenuated by partisan motivated reasoning, a striking difference from Western contexts. We also find that the presence of a correction matters more than how sophisticated that correction is: substantiating a correction with a source improves its effect in only a minority of cases; moreover, when social corrections are effective, citing a source does not drastically increase the size of their effect. These results have implications for both users and platforms, and speak to countering misinformation in developing countries that rely on private messaging apps.