LLMs Reinforce Anthropomorphic Projections in Moral Judgment Queries
A recent arXiv preprint (2604.22764v1) explores how large language models (LLMs) respond to user queries asking for moral judgments in social conflicts, such as 'who was wrong?'. The researchers argue that such queries implicitly humanize the model and carry anthropomorphic projections onto it. They evaluate the responses of four prominent general-purpose LLMs, focusing on linguistic, behavioral, and cognitive anthropomorphic cues, and contribute a new dataset of simulated moral judgment queries. The results indicate that current LLM responses tend to reinforce this implicit humanization, raising concerns about overreliance and misplaced trust. The authors call for further research that broadens the study of anthropomorphism to include implicit, user-side humanization and that develops mitigations for it.
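The summary does not spell out the authors' coding procedure, but a minimal sketch of the general setup, with a hypothetical `query_llm` helper, an invented sample query, and placeholder regex patterns standing in for the paper's cue categories, might look like this:

```python
import re

# Hypothetical stand-in for a call to one of the evaluated models; the
# summary does not say which APIs or models the authors actually used.
def query_llm(prompt: str) -> str:
    # Canned reply used only so the sketch runs end to end.
    return "I feel for you. Honestly, your roommate was wrong to deny it."

# Illustrative simulated moral-judgment query in the spirit of the paper's
# dataset (not an actual item from it).
SAMPLE_QUERY = (
    "My roommate used my laptop without asking and then denied it. "
    "Who was wrong?"
)

# Crude surface-level patterns standing in for the paper's linguistic,
# behavioral, and cognitive cue categories, which are not detailed here.
ANTHROPOMORPHIC_CUES = {
    "first_person_stance": re.compile(r"\bI (think|feel|believe)\b", re.I),
    "empathic_language": re.compile(r"\bI (feel for you|understand how you feel)\b", re.I),
    "direct_verdict": re.compile(r"\bwas wrong\b", re.I),
}

def flag_cues(response: str) -> dict:
    """Report which placeholder anthropomorphism cues appear in a response."""
    return {name: bool(pattern.search(response))
            for name, pattern in ANTHROPOMORPHIC_CUES.items()}

if __name__ == "__main__":
    reply = query_llm(SAMPLE_QUERY)
    print(flag_cues(reply))
    # {'first_person_stance': True, 'empathic_language': True, 'direct_verdict': True}
```

This is only an assumption-laden illustration of how simulated queries could be sent to a model and its replies scanned for humanizing signals, not the paper's method.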
Key facts
- Study examines LLM responses to moral judgment queries in social conflicts
- Identifies such queries as implicitly humanizing with anthropomorphic projections
- Analyzes four major general-purpose LLMs
- Evaluates responses for linguistic, behavioral, and cognitive anthropomorphic cues
- Contributes a novel dataset of simulated user queries for moral judgments
- Finds LLM responses reinforce implicit humanization
- Highlights risks of overreliance and misplaced trust
- Calls for future work on implicit user-side humanization and on mitigations for it
Entities
Institutions
- arXiv