Theoretical Framework Compares Invariance Strategies in Machine Learning
A recent study published on arXiv (2605.11008) presents a theoretical framework for analyzing the generalization error of group averaging and canonization, two techniques for making non-invariant backbones invariant. The authors establish a hierarchy of error bounds: canonized models are at best on par with architecturally invariant and group-averaged models, and at worst comparable to non-invariant baselines. Where a given canonization falls in this range depends on its regularity: optimal canonizations attain the invariant bounds, while poorly behaved ones only match the non-invariant bounds. The framework applies to permutation groups.
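To make the two strategies concrete, here is a minimal Python sketch for the symmetric group acting on vector coordinates. The backbone, the sorting-based canonical map, and all names are our own illustrative assumptions, not the paper's construction.

```python
import itertools
import numpy as np

# Hypothetical non-invariant backbone: a fixed random linear map,
# used only for illustration; any permutation-sensitive model would do.
rng = np.random.default_rng(0)
W = rng.normal(size=5)

def backbone(x):
    """Non-invariant base model f(x) = <W, x>."""
    return float(W @ x)

def group_averaged(x):
    """Average f over all of S_n: exactly invariant, but costs n!
    backbone evaluations per input."""
    perms = itertools.permutations(range(len(x)))
    return float(np.mean([backbone(x[list(p)]) for p in perms]))

def canonized(x):
    """Map x to a canonical representative of its orbit (here: sorting,
    a standard canonical map for permutation actions), then evaluate f
    once."""
    return backbone(np.sort(x))

x = np.array([3.0, 1.0, 2.0, 5.0, 4.0])
x_shuffled = x[[2, 0, 4, 1, 3]]

# Both constructions return the same output on every reordering of x.
assert np.isclose(group_averaged(x), group_averaged(x_shuffled))
assert np.isclose(canonized(x), canonized(x_shuffled))
```

Sorting is a well-behaved canonical map for this action; the paper's point, in these terms, is that the regularity of such a map determines whether the canonized model inherits the invariant bounds or only the non-invariant ones.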
Key facts
- arXiv paper 2605.11008
- Title: When and How to Canonize: A Generalization Perspective
- Analyzes generalization error of group averaging and canonization
- Establishes a hierarchy of error bounds (sketched after this list)
- Canonized models at best equal invariant/group-averaged models
- Canonized models at worst equal non-invariant baselines
- Optimal canonizations achieve optimal bounds
- Poor canonizations match non-invariant bounds
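Written out as a hedged sketch in our own notation (the paper's exact quantities may differ), with R(·) denoting a generalization-error bound, the hierarchy reads:

```latex
R_{\mathrm{inv}} \;=\; R_{\mathrm{avg}}
\;\le\; R_{\mathrm{canon}}
\;\le\; R_{\mathrm{non\text{-}inv}}
```

The left inequality becomes an equality for an optimal (sufficiently regular) canonization, and the right one for a poor canonization.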