Scientists across disciplines increasingly rely on artificial intelligence to scan, summarize, and interpret research articles. While these tools offer speed and clarity, their perceived effectiveness exposes deeper, systemic issues in science. Specifically, we argue that the popularity of AI highlights two key problems: the historical failure to communicate research clearly, and the persistent tendency to overestimate how well other scientists understand published research. Rather than being an isolated trend, AI use reflects how academic writing often excludes more than it informs. Here, we trace how institutional norms, training, incentives, and culture reinforce reliance on AI, and outline reforms to promote more inclusive, comprehensible science.
