ARTFEED — Contemporary Art Intelligence

Cognitive Model Reveals Why Users Struggle with AI Explanations

ai-technology · 2026-05-01

A recent study published on arXiv (2604.27354) examines why explainable AI (XAI) has had limited success in improving user comprehension. Working with structured data, the researchers studied the reasoning strategies people use under different explanation conditions (no explanation, feature importance, and feature attribution) in a forward simulation task, in which users predict an AI system's decisions. Candidate strategies were identified in a formative user study, and participants' decisions were then collected in a summative study. Building cognitive models from fundamental cognitive processes, the researchers found that these models matched human decision-making more closely than standard machine learning proxies, yielding insight into which reasoning strategies are effective.
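
As a rough illustration of the forward simulation comparison described above, here is a minimal Python sketch; the synthetic data, the single-feature heuristic standing in for a cognitive model, and the logistic regression proxy are all illustrative assumptions, not the study's actual models or data.

    # Illustrative only: hypothetical data and models, not the study's.
    import numpy as np
    from sklearn.linear_model import LogisticRegression

    rng = np.random.default_rng(0)

    # Synthetic structured data and a "black-box" decision rule to be simulated.
    X = rng.normal(size=(200, 4))                      # four tabular features
    ai_decision = (0.8 * X[:, 0] + 0.3 * X[:, 1] > 0).astype(int)

    # Hypothetical human responses in the forward simulation task: participants
    # mostly follow the single most important feature shown in the explanation.
    feature_importance = np.array([0.8, 0.3, 0.0, 0.0])
    top_feature = int(np.argmax(feature_importance))
    human_pred = (X[:, top_feature] > 0).astype(int)
    lapses = rng.random(200) < 0.1                     # occasional random errors
    human_pred[lapses] = 1 - human_pred[lapses]

    # Cognitive-model proxy: a one-feature heuristic built from a basic process.
    cognitive_proxy = (X[:, top_feature] > 0).astype(int)

    # Standard ML proxy: logistic regression trained to mimic the AI's decisions.
    ml_proxy = LogisticRegression().fit(X, ai_decision).predict(X)

    print("cognitive model vs. humans:", (cognitive_proxy == human_pred).mean())
    print("ML proxy vs. humans:       ", (ml_proxy == human_pred).mean())

In this toy setup the heuristic agrees with the simulated human responses up to the injected lapses, while the ML proxy, which mimics the AI rather than the person, agrees less often; the actual comparison in the paper depends on the strategies identified in its formative study.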

Key facts

  • Study examines reasoning strategies for XAI methods on structured data
  • Methods tested: none, feature importance, feature attribution (see the sketch after this list)
  • Task: anticipating AI decisions (forward simulation)
  • Data from formative and summative user studies
  • Cognitive models outperformed baseline ML proxies
  • Published on arXiv with ID 2604.27354
  • Focus on human cognition to explain XAI effectiveness
  • Goal: improve user understanding and decisions with AI
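
For the bullet on the XAI methods tested, the sketch below shows one common, generic way to produce feature-importance scores for a tabular classifier, permutation importance; the random-forest model, synthetic data, and parameters are assumptions for illustration, not the attribution method used in the study.

    # Generic permutation-importance illustration on synthetic tabular data.
    import numpy as np
    from sklearn.ensemble import RandomForestClassifier
    from sklearn.inspection import permutation_importance

    rng = np.random.default_rng(1)
    X = rng.normal(size=(300, 4))
    y = (X[:, 0] + 0.5 * X[:, 2] > 0).astype(int)

    model = RandomForestClassifier(n_estimators=50, random_state=1).fit(X, y)

    # Importance: how much shuffling each column degrades the model's score.
    result = permutation_importance(model, X, y, n_repeats=10, random_state=1)
    for i, score in enumerate(result.importances_mean):
        print(f"feature {i}: importance {score:.3f}")

Scores like these are the kind of information a "feature importance" condition would display to users before they predict the AI's decision.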

Entities

Institutions

  • arXiv

Sources