ARTFEED — Contemporary Art Intelligence

LLMs Should Not Yet Be Credited with Decision Explanation

publication · 2026-05-06

A recent position paper on arXiv (2605.01164) argues that large language models should not yet be credited with explaining decisions. The authors warn that a growing line of research treats accurate behavioral prediction, plausible justifications, and reasoning traces linked to outcomes as evidence that LLMs explain why humans decide as they do, a reading that risks prematurely redefining what counts as progress in human decision modeling. The paper distinguishes three claims that demand different levels of evidence: decision prediction, rationale generation, and decision explanation. Current evidence, it argues, supports prediction and rationale generation, and perhaps the formation of explanatory hypotheses, but it cannot separate genuine explanation from rationalization that merely fits the predictions. The authors therefore propose a bridge standard for awarding decision-explanation credit, under which stronger claims must identify their explanatory targets and be distinguished from the weaker alternatives.

Key facts

  • arXiv paper 2605.01164 argues LLMs should not yet be credited with decision explanation.
  • Recent work treats prediction and plausible rationales as evidence of explanation.
  • Three claims distinguished: decision prediction, rationale generation, decision explanation.
  • Evidence supports prediction and rationale generation, not explanation.
  • Proposes a bridge standard for decision-explanation credit.
  • Paper warns of premature redefinition of explanatory progress.
  • Authors are unnamed in the abstract.
  • Paper is a position paper, not an empirical study.

Entities

Institutions

  • arXiv
