Study Reveals How Warmth and Empathy Drive Anthropomorphism and Trust in LLM Interactions
A recent investigation of interactions between humans and large language models (LLMs) finds that warmth and cognitive empathy are key factors shaping perceptions of anthropomorphism, trust, similarity, relational closeness, frustration, and usefulness. The research, posted as arXiv preprint 2604.15316v1, involved 115 participants who interacted with chatbots that systematically varied in warmth (friendliness), competence (capability and coherence), and empathy (both cognitive and affective). Warmth and cognitive empathy predicted all assessed outcomes, while competence predicted every outcome except anthropomorphism. Affective empathy mainly influenced relational perceptions and did not affect epistemic outcomes. Sub-analyses revealed that personally significant topics, such as relationships, triggered stronger anthropomorphic responses. The abstract emphasizes the growing tendency to attribute human-like qualities to LLMs as they become more integrated into everyday life.
Key facts
- Study analyzed over 2,000 human-LLM interactions
- 115 participants engaged with systematically varied chatbots
- Chatbots varied in warmth, competence, and empathy dimensions
- Warmth and cognitive empathy predicted all measured outcomes
- Competence predicted all outcomes except anthropomorphism
- Affective empathy predicted relational measures but not epistemic outcomes
- Published as arXiv preprint 2604.15316v1
- More subjective topics elicited stronger anthropomorphic responses
Entities
Institutions
- arXiv