
Too human to model: the uncanny valley of large language models in simulating human systems

Authors:
Zeng, Yongchao; Brown, Calum; Rounsevell, Mark
Publication:
npj Complexity
Year:
2026

Large language models (LLMs) have been increasingly used to simulate human behaviour because of their ability to generate contextually coherent dialogues. Such abilities can enhance the realism of models. However, the pursuit of realism is not necessarily compatible with the epistemic foundations of modelling. We explore when LLM agents can be too 'human' to model, i.e., when they are too expressive, detailed, and intractable to be consistent with the abstraction, simplification, and interpretability typically demanded by modelling. Through a model-building thought experiment, we uncover five core dilemmas: a temporal resolution mismatch between natural conversation and abstract time steps; the need for intervention in agent conversations without undermining spontaneous outputs; the temptation to introduce rules while maintaining conversational naturalness; the tension between role consistency and role evolution; and the challenge of understanding emergence. These dilemmas lead LLM agents to an 'uncanny valley': more realistic than rule-based agents but recognisably unhuman.
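The first dilemma, the temporal resolution mismatch, can be illustrated with a minimal sketch (not the authors' code; all class and function names here are hypothetical). A rule-based agent makes one atomic decision per simulation tick, whereas an LLM-style agent may need several dialogue turns before it commits to an action within the same tick:

```python
class RuleAgent:
    """Classic agent-based-model agent: one atomic decision per tick."""
    def __init__(self, wealth=0):
        self.wealth = wealth

    def step(self):
        self.wealth += 1   # one rule fires per tick
        return 1           # events generated this tick


class DialogueAgent:
    """LLM-style agent (stand-in): a single tick spans several
    conversational turns, so 'one time step' no longer maps
    cleanly to one decision."""
    def __init__(self, wealth=0, turns_per_decision=4):
        self.wealth = wealth
        self.turns_per_decision = turns_per_decision
        self.transcript = []

    def step(self):
        # Placeholder for a multi-turn LLM conversation that must
        # complete before the agent acts within this tick.
        for t in range(self.turns_per_decision):
            self.transcript.append(f"turn {t}: deliberating...")
        self.wealth += 1
        return self.turns_per_decision


def run(agent, ticks):
    """Advance the simulation and count events produced."""
    return sum(agent.step() for _ in range(ticks))


rule_events = run(RuleAgent(), ticks=10)      # 10 atomic events
llm_events = run(DialogueAgent(), ticks=10)   # 40 turns for the same 10 ticks
print(rule_events, llm_events)
```

The point of the sketch is that the dialogue agent produces four times as many events for the same simulated interval, so the modeller must decide how (or whether) conversational time maps onto abstract simulation time.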