The widespread adoption of conversational artificial intelligence (AI) systems has raised concerns about social bias, particularly towards vulnerable groups. This study explores how these systems respond to and regulate discriminatory content through a cross-lingual, cross-platform comparative analysis of six leading conversational AI systems: ChatGPT, Gemini, Llama, Ernie Bot, ChatGLM, and Tongyi. Using a mixed-methods approach, the study reveals that refusal sensitivity and answering strategies vary significantly across systems, languages, and topics. The paper offers new insights into the moderation strategies of conversational AI systems and introduces a framework for auditing how such systems handle socially discriminatory content.
