ARTFEED — Contemporary Art Intelligence

LLMs Rarely Admit Uncertainty in Ambiguous Social Situations

ai-technology · 2026-04-29

A new study on arXiv (2604.23942) examines how large language models (LLMs) handle ambiguous social interactions across four domains: early romantic relationships, teacher-student dynamics, workplace hierarchies, and friendships. The researchers analyzed 72 responses from GPT, Claude, and Gemini. Only 9 (12.5%) preserved genuine uncertainty; the remaining 87.5% produced interpretive closure through narrative alignment, reversal, normative advice, or hedged language that still pointed to a single conclusion. The study also found that first-person narratives more often elicited alignment, while third-person accounts led to different closure patterns. The findings highlight a tendency in LLMs to resolve ambiguity rather than acknowledge it, raising questions about their use in social interpretation.

Key facts

  • Study on arXiv: 2604.23942
  • Four domains: romantic relationships, teacher-student, workplace, friendships
  • 72 responses from GPT, Claude, Gemini
  • Only 9 (12.5%) preserved uncertainty
  • 87.5% produced interpretive closure
  • Closure pathways: narrative alignment, reversal, normative advice, hedged language
  • First-person accounts more often elicited alignment
  • Third-person accounts led to different closure patterns
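The headline rates follow directly from the counts above. A minimal tally sketch is shown below; note that only the totals (72 responses, 9 preserving uncertainty) come from the article, while the split across the four closure pathways is hypothetical for illustration:

```python
from collections import Counter

# Hypothetical labels for the 72 responses. The totals (72 overall,
# 9 preserving uncertainty) are from the article; the per-pathway
# closure counts below are illustrative assumptions, not study data.
labels = (
    ["uncertainty"] * 9
    + ["narrative_alignment"] * 30   # hypothetical count
    + ["reversal"] * 10              # hypothetical count
    + ["normative_advice"] * 15      # hypothetical count
    + ["hedged_language"] * 8        # hypothetical count
)

counts = Counter(labels)
total = len(labels)
uncertainty_rate = counts["uncertainty"] / total
closure_rate = 1 - uncertainty_rate

print(f"total responses: {total}")                        # 72
print(f"preserved uncertainty: {uncertainty_rate:.1%}")   # 12.5%
print(f"interpretive closure: {closure_rate:.1%}")        # 87.5%
```

The 12.5% and 87.5% figures in the article are simply 9/72 and 63/72.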

Entities

Institutions

  • arXiv

Sources