Neural Networks Gain Generalized SVD Representation
A recent arXiv preprint (2605.06938) builds on the Generalized Singular Value Decomposition (GSVD) framework of Brown et al. (2025), extending it to contemporary neural network architectures. The authors show that most modern networks can be expressed in a generalized SVD form that is left-invertible up to the final linear layer, without altering input-output behavior. The left-invertible nonlinear portion can additionally be made norm-preserving, so that perturbations in the embedding space correspond proportionally to perturbations of the input; this calibrates distances in feature space against distances in input space. The paper presents a data-driven algorithm for estimating the representation from trained models, proposes a model architecture that facilitates the decomposition, and offers a proof-of-concept showing that the derived representation can detect input perturbations.
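The paper's construction is not detailed in this summary, but the two properties it relies on, left-invertibility and norm preservation, are easy to illustrate. The sketch below is a minimal numpy example, not the paper's method: it combines a column-orthonormal weight matrix with the classical CReLU nonlinearity, both of which preserve the Euclidean norm exactly and admit an explicit left inverse.

```python
import numpy as np

def crelu(x):
    """CReLU: concatenate ReLU(x) and ReLU(-x). Exactly norm-preserving,
    since relu(t)**2 + relu(-t)**2 == t**2 for every coordinate t, and
    left-invertible, since x == relu(x) - relu(-x)."""
    return np.concatenate([np.maximum(x, 0.0), np.maximum(-x, 0.0)])

def crelu_left_inverse(z):
    """Recover x from z = crelu(x)."""
    n = z.shape[0] // 2
    return z[:n] - z[n:]

rng = np.random.default_rng(0)
# Column-orthonormal weight (W.T @ W = I): a norm-preserving linear map,
# left-invertible via W.T.
W, _ = np.linalg.qr(rng.standard_normal((64, 16)))  # maps R^16 -> R^64

x = rng.standard_normal(16)
z = crelu(W @ x)                      # one left-invertible, norm-preserving block
x_rec = W.T @ crelu_left_inverse(z)   # exact recovery of the input

assert np.allclose(x, x_rec)
assert np.isclose(np.linalg.norm(z), np.linalg.norm(x))
```

Stacking such blocks yields a nonlinear network that is left-invertible and norm-preserving up to its final linear layer, which is the structural property the paper attributes to its GSVD representation.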
Key facts
- Paper extends GSVD theory of Brown et al. (2025) to neural networks
- Most modern neural architectures admit a generalized SVD representation
- Networks are left-invertible up to the final linear layer, with no change in input-output behavior
- The left-invertible nonlinear portion can be made norm-preserving
- Perturbations in the embedding correspond proportionally to perturbations of the input
- Data-driven algorithm estimates representation from trained models
- Proposed model architecture facilitates the decomposition
- Proof-of-concept shows the representation can identify input perturbations (see the sketch below)
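As a hedged illustration of the perturbation-detection idea (a toy encoder, not the paper's estimated representation): each CReLU block shrinks distances by at most a factor of √2 and orthogonal weights are isometries, so embedding-space distances are calibrated to input-space distances and a threshold on the embedding distance bounds the size of the input perturbation. The threshold `tau` below is hypothetical.

```python
import numpy as np

def crelu(x):
    # Norm-preserving, left-invertible nonlinearity (see sketch above).
    return np.concatenate([np.maximum(x, 0.0), np.maximum(-x, 0.0)])

rng = np.random.default_rng(1)
W1, _ = np.linalg.qr(rng.standard_normal((32, 16)))   # R^16 -> R^32, W1.T @ W1 = I
W2, _ = np.linalg.qr(rng.standard_normal((128, 64)))  # R^64 -> R^128, W2.T @ W2 = I

def embed(x):
    """Toy norm-preserving encoder: two orthogonal-weight CReLU blocks.
    Bi-Lipschitz: ||x - y|| / 2 <= ||embed(x) - embed(y)|| <= ||x - y||."""
    return crelu(W2 @ crelu(W1 @ x))

x = rng.standard_normal(16)
tau = 1e-2  # hypothetical detection threshold on embedding distance
for eps in (1e-3, 1e-2, 1e-1):
    x_pert = x + eps * rng.standard_normal(16) / np.sqrt(16)
    d_in = np.linalg.norm(x_pert - x)
    d_emb = np.linalg.norm(embed(x_pert) - embed(x))
    # The bi-Lipschitz bounds guarantee d_emb tracks d_in, so thresholding
    # the embedding distance detects perturbations of the input.
    print(f"eps={eps:.0e}  input dist={d_in:.2e}  "
          f"embedding dist={d_emb:.2e}  flagged={d_emb > tau}")
```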