ARTFEED — Contemporary Art Intelligence

AI Models Struggle with Abstract Causal Transfer Compared to Humans

ai-technology · 2026-04-29

A recent study published on arXiv (2604.24062) examines whether Large Language Models (LLMs) and Vision Language Models (VLMs) can transfer abstract causal structures across contexts, a hallmark of human cognition. The researchers used the OpenLock paradigm, in which learners must sequentially identify Common Cause (CC) and Common Effect (CE) structures. The findings show that AI models often exhibit delayed or absent transfer, and even the better-performing models require substantial initial exposure to their environments, whereas humans transfer knowledge after minimal exposure. Classical Reinforcement Learning (RL) agents fail catastrophically on the same task. The work underscores a significant gap between human and machine causal learning.
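To make the two structures concrete, here is a minimal illustrative sketch (not the paper's code, and the graph representation is an assumption for illustration only): a Common Cause structure has one cause fanning out to several effects, while a Common Effect structure has several causes converging on one effect.

```python
# Illustrative only: the two abstract causal structures from the study,
# represented as directed graphs mapping each parent to its children.
# In a Common Cause (CC) structure, one cause drives several effects;
# in a Common Effect (CE) structure, several causes converge on one effect.

CC = {"cause": ["effect_1", "effect_2", "effect_3"]}
CE = {"cause_1": ["effect"], "cause_2": ["effect"], "cause_3": ["effect"]}

def structure_type(graph):
    """Classify a two-layer causal graph as 'CC' or 'CE' by fan-out/fan-in."""
    parents = list(graph)
    children = {c for kids in graph.values() for c in kids}
    if len(parents) == 1 and len(children) > 1:
        return "CC"
    if len(parents) > 1 and len(children) == 1:
        return "CE"
    return "other"

print(structure_type(CC))  # prints "CC"
print(structure_type(CE))  # prints "CE"
```

The transfer question the paper probes is whether a learner that has solved tasks built on one of these structures can recognize and exploit the same structure when it reappears in a new context.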

Key facts

  • Study published on arXiv under identifier 2604.24062
  • Investigates causal transfer in LLMs and VLMs using OpenLock paradigm
  • Humans achieve transfer after minimal exposure
  • Classical RL agents fail catastrophically
  • AI models show delayed or absent transfer
  • Successful models require initial environmental exposure
  • Focus on Common Cause (CC) and Common Effect (CE) structures
  • Open question whether AI possesses human-like causal transfer mechanisms

Entities

Institutions

  • arXiv

Sources