ARTFEED — Contemporary Art Intelligence

Proposing an Epistemic Constitution to Counter AI Coherence Bias

publication · 2026-04-24

A recent study posted to arXiv (2601.14295v3) contends that large language models (LLMs) operate under unexamined epistemic policies that produce inherent biases. The author highlights source attribution bias: frontier models enforce identity-stance coherence by penalizing arguments attributed to sources whose ideological positions conflict with the argument's content. The bias disappears under systematic evaluation, which suggests that models treat source sensitivity as a defect to be suppressed rather than a capability to be harnessed. The paper proposes an "epistemic constitution"—explicit, contestable meta-norms to govern AI belief formation and expression. It contrasts two constitutional frameworks: the Platonic, which prioritizes formal correctness and source-independence, and the Libera, which is left abstract. The study advocates transparency and contestability in AI reasoning.

Key facts

  • Paper arXiv:2601.14295v3 addresses epistemic policies in LLMs.
  • Author identifies source attribution bias in frontier models.
  • Models enforce identity-stance coherence, penalizing conflicting attributions.
  • The bias disappears under systematic testing conditions.
  • Paper proposes an explicit epistemic constitution for AI.
  • Two constitutional approaches: Platonic and Libera.
  • Platonic approach mandates formal correctness and source-independence.
  • The research highlights the need for contestable meta-norms in AI reasoning.

Entities

Institutions

  • arXiv

Sources