This article explores how Large Language Model (LLM) chatbots regulate moral values when they refuse ‘unsafe’ requests from users. It applies corpus-based discourse analysis to examine how the chatbots employ tenor resources of positioning, tuning, and orienting in the rhetoric of their refusals. This method is informed by Systemic Functional Linguistics, in particular the discourse semantic system of appraisal, which models evaluative meaning. Despite their contrite openings, chatbot refusals tend to raise the stakes in terms of tenor. They deploy prosodies of propriety targeted at moral and taboo stances and behaviours. This rhetoric of oppositioning involves encapsulating key values into iconised attitudes as the chatbots advise users about what is ‘important’ and ‘not appropriate’.
