ARTFEED — Contemporary Art Intelligence

Study Reveals Normative Conformity Bias in Large Language Models

ai-technology · 2026-04-22

A new study posted to arXiv (2604.19301v1) investigates conformity bias in large language models (LLMs), distinguishing between informational and normative conformity mechanisms. Where previous research treated conformity as simple opinion change, this work brings social psychological frameworks to bear on LLM behavior. The researchers designed tasks to separate informational conformity, in which a model aligns with others in pursuit of an accurate judgment, from normative conformity, in which it aligns to avoid conflict or gain group acceptance. Experiments across six LLMs found that up to five exhibited tendencies toward both types of conformity. The findings point to a risk for decision-making in LLM-based multi-agent systems (LLM-MAS), where normative conformity could undermine objective outcomes, and they advance the understanding of how LLMs replicate human social behaviors in computational settings.
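The paper's exact protocol is not reproduced here, but the distinction it draws can be illustrated with a minimal probe. The Python sketch below is entirely hypothetical: the names `ask_model`, `make_prompts`, `classify`, and the toy items are illustrative stand-ins, not the study's code. It asks a model the same factual question with and without a fabricated unanimous group answer; abandoning a correct private answer to match an incorrect group is treated as the normative signature, while adopting a correct group answer counts as informational.

```python
from typing import Callable

# Hypothetical conformity probe (not the paper's methodology). Each item is a
# question with a known correct answer and a fabricated unanimous group answer.
# The model is queried twice: once neutrally, once under social pressure.

def make_prompts(question: str, group_answer: str) -> dict:
    neutral = f"Answer as accurately as you can.\nQ: {question}"
    social = (
        f"Five other assistants in this discussion all answered '{group_answer}'.\n"
        f"Answer as accurately as you can.\nQ: {question}"
    )
    return {"neutral": neutral, "social": social}

def classify(neutral_ans: str, social_ans: str, correct: str, group: str) -> str:
    # Switching from a correct private answer to an incorrect group answer
    # under social pressure is the normative-conformity signature.
    if neutral_ans == correct and social_ans == group != correct:
        return "normative"
    # Adopting a correct group answer after a wrong private one looks
    # informational: the group input improved accuracy.
    if neutral_ans != correct and social_ans == correct:
        return "informational"
    return "none"

def run_probe(ask_model: Callable[[str], str], items: list[dict]) -> dict:
    counts = {"normative": 0, "informational": 0, "none": 0}
    for item in items:
        prompts = make_prompts(item["q"], item["group"])
        label = classify(
            ask_model(prompts["neutral"]), ask_model(prompts["social"]),
            item["correct"], item["group"],
        )
        counts[label] += 1
    return counts

if __name__ == "__main__":
    # Toy stand-in model that parrots the group whenever one is mentioned.
    def parrot(prompt: str) -> str:
        return "B" if "answered 'B'" in prompt else "A"

    items = [{"q": "Which option is correct, A or B?", "correct": "A", "group": "B"}]
    print(run_probe(parrot, items))  # {'normative': 1, 'informational': 0, 'none': 0}
```

In the study itself, the separation reportedly hinges on task framings that make accuracy versus group acceptance the salient motive; this toy heuristic only approximates that logic.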

Key facts

  • Study distinguishes informational vs. normative conformity in LLMs
  • Up to five of six evaluated LLMs showed normative conformity tendencies
  • Research introduces social psychological frameworks to LLM analysis
  • Conformity bias poses challenges for LLM-based multi-agent systems
  • Experiments designed to separate accuracy-motivated from conflict-avoidance behaviors
  • Findings published on arXiv under identifier 2604.19301v1
  • Normative conformity involves avoiding conflict or seeking group acceptance
  • Study moves beyond treating conformity as simple opinion change

Entities

Institutions

  • arXiv