Decision Framework for Information-Theoretic Measures in AI
A new paper on arXiv (2604.23716) presents a practical decision framework for seven information-theoretic (IT) measures used in artificial intelligence: entropy, cross-entropy, mutual information, transfer entropy, integrated information (Phi), effective information (EI), and autonomy. The framework addresses three prescriptive questions per measure: what question the measure answers and in which AI context, which estimator suits the data type and dimensionality, and what its most dangerous failure modes are. The authors observe that, in practice, measure selection is often decoupled from estimator assumptions and from the inferential claims the data can safely support. The first family of measures is workhorse machinery: entropy drives decision-tree splits and uncertainty quantification; cross-entropy is the default classification loss; mutual information underpins representation learning and feature selection; transfer entropy reveals directed influence in dynamical systems. The second family, Phi, EI, and autonomy, characterizes agent complexity. The paper aims to guide practitioners in choosing and applying these measures correctly.
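As a quick illustration of the first family of measures, the standard plug-in (maximum-likelihood) formulas for discrete distributions can be sketched as follows. This is generic textbook code, not the paper's framework or its recommended estimators, and the plug-in approach is exactly the kind of choice whose bias and sample-size assumptions the paper warns practitioners to check.

```python
import numpy as np

def entropy(p):
    """Shannon entropy H(p) = -sum p log2 p, in bits."""
    p = np.asarray(p, dtype=float)
    p = p[p > 0]                      # 0 log 0 = 0 by convention
    return -np.sum(p * np.log2(p))

def cross_entropy(p, q):
    """Cross-entropy H(p, q) = -sum p log2 q, in bits."""
    p, q = np.asarray(p, dtype=float), np.asarray(q, dtype=float)
    mask = p > 0
    return -np.sum(p[mask] * np.log2(q[mask]))

def mutual_information(joint):
    """Plug-in MI (bits) from a joint probability table p(x, y)."""
    joint = np.asarray(joint, dtype=float)
    px = joint.sum(axis=1, keepdims=True)  # marginal p(x)
    py = joint.sum(axis=0, keepdims=True)  # marginal p(y)
    mask = joint > 0
    return np.sum(joint[mask] * np.log2(joint[mask] / (px * py)[mask]))

print(entropy([0.5, 0.5]))                 # fair coin: 1.0 bit
print(cross_entropy([0.5, 0.5], [0.9, 0.1]))
print(mutual_information(np.array([[0.4, 0.1], [0.1, 0.4]])))
```

Note that these formulas take true probability tables as input; estimating them from raw samples is where the framework's estimator-selection and failure-mode questions become decisive.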
Key facts
- Paper ID: arXiv:2604.23716
- Announce type: new
- Covers seven IT measures: entropy, cross-entropy, mutual information, transfer entropy, Phi, EI, autonomy
- Framework organized around three prescriptive questions per measure
- Entropy used in decision-tree splits and uncertainty quantification
- Cross-entropy is default classification loss
- Mutual information used in representation learning and feature selection
- Transfer entropy reveals directed influence in dynamical systems
- Phi, EI, and autonomy characterize agent complexity
- Measure selection often decoupled from estimator assumptions and failure modes
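The directed-influence claim for transfer entropy can be made concrete with a minimal sketch. The lag-1 plug-in estimator for binary time series below is a hypothetical illustration, not the paper's estimator: it computes TE(X→Y) = Σ p(y_{t+1}, y_t, x_t) log2 [ p(y_{t+1} | y_t, x_t) / p(y_{t+1} | y_t) ], the extra predictability of Y's next state from X's past beyond Y's own past.

```python
import numpy as np
from collections import Counter

def transfer_entropy(x, y):
    """Lag-1 plug-in transfer entropy TE(X -> Y), in bits, for discrete series."""
    x, y = np.asarray(x), np.asarray(y)
    n = len(y) - 1
    triples = Counter(zip(y[1:], y[:-1], x[:-1]))   # counts of (y_{t+1}, y_t, x_t)
    pairs_yx = Counter(zip(y[:-1], x[:-1]))         # counts of (y_t, x_t)
    pairs_yy = Counter(zip(y[1:], y[:-1]))          # counts of (y_{t+1}, y_t)
    singles_y = Counter(y[:-1])                     # counts of y_t
    te = 0.0
    for (y1, y0, x0), c in triples.items():
        p_joint = c / n
        p_full = c / pairs_yx[(y0, x0)]             # p(y_{t+1} | y_t, x_t)
        p_self = pairs_yy[(y1, y0)] / singles_y[y0] # p(y_{t+1} | y_t)
        te += p_joint * np.log2(p_full / p_self)
    return te

# x drives y with a one-step delay: TE(X -> Y) should approach H(X) = 1 bit,
# while TE from x to an independent series should be near zero.
rng = np.random.default_rng(0)
x = rng.integers(0, 2, 5000)
y = np.empty_like(x)
y[0], y[1:] = 0, x[:-1]
print(transfer_entropy(x, y))
```

Plug-in counting like this is upward-biased on short series and infeasible for high-dimensional continuous data, which is precisely the estimator/dimensionality question the framework asks practitioners to confront.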
Entities
Institutions
- arXiv