ARTFEED — Contemporary Art Intelligence

CIKA: Using LLMs as Causal Simulators for Mathematical Reasoning

ai-technology · 2026-05-11

CIKA (Causal Intervention for Knowledge Activation) is a framework that uses a large language model as an interventional simulator to identify concepts that causally contribute to correct mathematical solutions. It formalizes an Interventional Capability Probe (ICP), which measures whether an LLM can actually use a given concept, as distinct from merely possessing the knowledge. By externally setting a concept's state to "mastered" through prompting, CIKA separates a concept's causal effect from confounders such as problem difficulty, which purely observational techniques struggle to disentangle. In a study of 67 screened problems, intervening on the top-ranked concept improved the ICP by +0.219. The work, published on arXiv (2605.07600), addresses the spurious associations that confounders like problem difficulty introduce into current MCTS-based and causal-graph-guided methods.
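The intervention idea can be sketched as follows: simulate do(concept = "mastered") by prepending a prompt that tells the model to treat the concept as fully mastered, then take the ICP to be the change in solve accuracy relative to the unintervened baseline. The function names, prompt wording, and mock solver below are illustrative assumptions, not the paper's actual API or prompts.

```python
def accuracy(solve, problems, prompt_prefix):
    """Fraction of problems the solver answers correctly under a prompt prefix."""
    correct = sum(
        1 for p in problems
        if solve(prompt_prefix + p["question"]) == p["answer"]
    )
    return correct / len(problems)

def icp(solve, problems, concept):
    """ICP(concept) ~ accuracy under do(concept = mastered) minus baseline accuracy."""
    baseline = accuracy(solve, problems, "")
    # Simulated intervention: externally set the concept state to "mastered"
    # via the prompt (hypothetical wording).
    intervention = f"Assume you have fully mastered the concept '{concept}'. "
    return accuracy(solve, problems, intervention) - baseline

# Toy stand-in for an LLM: it solves the problems only when the intervention
# prompt activates the relevant concept.
def mock_solve(prompt):
    if "mastered the concept 'modular arithmetic'" in prompt:
        return "2"
    return "0"

problems = [
    {"question": "What is 17 mod 5? ", "answer": "2"},
    {"question": "What is 22 mod 5? ", "answer": "2"},
]

print(icp(mock_solve, problems, "modular arithmetic"))  # prints 1.0 for this mock
```

Because the same problems are scored with and without the intervention, differences in raw problem difficulty cancel out of the ICP, which is the point of intervening rather than correlating concept presence with accuracy.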

Key facts

  • CIKA stands for Causal Intervention for Knowledge Activation
  • Uses LLM as interventional simulator via prompts
  • Formalizes Interventional Capability Probe (ICP)
  • ICP diagnoses concept usage vs. knowledge possession
  • Intervention sets concept state to 'mastered'
  • Separates confounding from problem difficulty
  • Tested on 67 screened problems
  • Top-ranked concept ICP achieved +0.219 improvement
  • Published on arXiv with ID 2605.07600
  • Addresses spurious associations in MCTS and causal graph methods

Entities

Institutions

  • arXiv
