Supervised Learning's Geometric Blind Spot: Theory and Repair
A new paper on arXiv proves that supervised learning inherently suffers from a geometric blind spot: any encoder that minimizes a supervised loss must retain sensitivity to nuisance directions in the training data that correlate with the labels. This is a mathematical necessity, not a failure of current methods. The finding unifies four previously separate empirical phenomena: non-robust features, texture bias, corruption fragility, and the robustness-accuracy tradeoff. Adversarial vulnerability is one consequence of this broader structural fact. The authors introduce the Trajectory Deviation Index (TDI) as a diagnostic and propose a minimal repair.
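The central quantity is the encoder's directional Jacobian sensitivity. As a minimal sketch (not code from the paper), the snippet below probes a toy encoder with a Jacobian-vector product along a candidate nuisance direction; the encoder, the input, and the direction are all placeholder assumptions.

```python
# Minimal sketch, not code from the paper: probe a toy encoder's
# directional Jacobian sensitivity with a Jacobian-vector product.
# The encoder, the input, and the nuisance direction are placeholders.
import torch
import torch.nn as nn

encoder = nn.Sequential(nn.Linear(32, 64), nn.ReLU(), nn.Linear(64, 16))

x = torch.randn(32)        # a single input point
v = torch.randn(32)
v = v / v.norm()           # unit-norm candidate nuisance direction

# ||J(x) v|| measures how strongly the representation moves along v;
# per the paper's claim, no minimizer of the supervised loss can drive
# this to zero for label-correlated nuisance directions.
_, jv = torch.autograd.functional.jvp(encoder, (x,), (v,))
print(f"directional sensitivity ||J v|| = {jv.norm().item():.4f}")
```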
Key facts
- Empirical risk minimization imposes a necessary geometric constraint on learned representations.
- Any encoder that minimizes the supervised loss must retain non-zero Jacobian sensitivity along label-correlated nuisance directions.
- This is a mathematical consequence of the supervised objective itself.
- The geometric blind spot holds across proper scoring rules, architectures, and dataset sizes.
- The theorem unifies four lines of prior empirical work: non-robust predictive features, texture bias, corruption fragility, and the robustness-accuracy tradeoff.
- Adversarial vulnerability is one consequence of this broader structural fact.
- The paper introduces the Trajectory Deviation Index (TDI) as a diagnostic; a hypothetical reading is sketched after this list.
- The paper proposes a minimal repair for the geometric blind spot; an illustrative regularizer in that spirit is also sketched below.
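The summary does not give TDI's formula, so the following is a purely hypothetical reading: treat "trajectory deviation" as how far the representation path f(x + t·v), t in [0, 1], strays from the straight chord between its endpoints, normalized by chord length. Every modeling choice here (the path, the norm, the normalization) is an assumption, not the paper's definition.

```python
# Hypothetical TDI sketch; the paper's definition is not given in this
# summary, so all modeling choices below are assumptions.
import torch
import torch.nn as nn

def trajectory_deviation_index(encoder, x, v, steps=16):
    """Max deviation of the representation path f(x + t*v) from the
    straight chord between its endpoints, normalized by chord length."""
    ts = torch.linspace(0.0, 1.0, steps)
    with torch.no_grad():
        path = torch.stack([encoder(x + t * v) for t in ts])  # (steps, d)
    start, end = path[0], path[-1]
    chord = start + ts.unsqueeze(1) * (end - start)  # line in feature space
    deviation = (path - chord).norm(dim=1).max()
    return (deviation / (end - start).norm().clamp_min(1e-8)).item()

# Toy setup; encoder, input, and probe direction are placeholders.
encoder = nn.Sequential(nn.Linear(32, 64), nn.ReLU(), nn.Linear(64, 16))
x = torch.randn(32)
v = torch.randn(32)
v = 0.5 * v / v.norm()   # small step along a unit nuisance direction
print(f"hypothetical TDI: {trajectory_deviation_index(encoder, x, v):.4f}")
```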
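Likewise, the paper's repair is not described here. As one illustrative possibility consistent with the diagnosis (our assumption, not the authors' method), training could penalize the Jacobian-vector-product norm along known nuisance directions:

```python
# Illustrative guess at a repair, not the paper's method: add a penalty
# on directional Jacobian sensitivity ||J(x) v|| to the task loss.
import torch
import torch.nn as nn

def nuisance_penalized_loss(encoder, head, x, y, v, lam=0.1):
    """Cross-entropy plus a squared JVP penalty along direction v."""
    task_loss = nn.functional.cross_entropy(head(encoder(x)), y)
    # create_graph=True keeps the JVP differentiable so the penalty trains.
    _, jv = torch.autograd.functional.jvp(encoder, (x,), (v,), create_graph=True)
    return task_loss + lam * jv.pow(2).sum()

# Toy usage; all shapes and modules are placeholder assumptions.
encoder = nn.Sequential(nn.Linear(32, 64), nn.ReLU(), nn.Linear(64, 16))
head = nn.Linear(16, 10)
x, y = torch.randn(8, 32), torch.randint(0, 10, (8,))
v = torch.randn(32)
v = (v / v.norm()).expand(8, 32)  # same nuisance direction per example
loss = nuisance_penalized_loss(encoder, head, x, y, v)
loss.backward()
```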
Entities
Institutions
- arXiv