Recent advancements in generative AI have raised widespread concern about the use of this technology to spread audio and visual misinformation. In response, there has been a major push among policymakers and technology companies to label AI-generated media appearing online. It remains unclear, however, what types of labels are most effective for this purpose. Here, we evaluate two (potentially complementary) strategies for labeling AI-generated content online: (i) a process-based approach, aimed at clarifying how content was made, and (ii) a harm-based approach, aimed at highlighting content’s potential to mislead. Using two preregistered survey experiments focused on misleading AI-generated images (total n = 7,579 Americans), we assess the consequences of these different labeling strategies for viewers’ beliefs and behavioral intentions. Overall, we find that all of the labels we tested significantly decreased participants’ belief in the presented claims. However, in both studies, labels that simply informed participants that content was generated using AI tended to have little impact on respondents’ stated likelihood of engaging with their assigned post. Together, these results shed light on the relative advantages and disadvantages of different approaches to labeling AI-generated media online.