HyperAdapt: Efficient High-Rank Adaptation for Foundation Models
HyperAdapt, a new parameter-efficient fine-tuning method, is introduced in an arXiv paper (2509.18629). It adapts a pre-trained weight matrix by applying row- and column-wise scaling via diagonal matrices, so an n×m matrix requires only n+m trainable parameters, far fewer than LoRA, while still producing a high-rank update. The authors establish a theoretical upper bound on the update's rank and show empirically that the induced transformations are high-rank across model layers. Experiments on GLUE, arithmetic reasoning, and commonsense reasoning benchmarks demonstrate its effectiveness.
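To make the mechanism concrete, here is a minimal PyTorch sketch of a diagonal-scaling adapter, assuming the adapted weight takes the form W' = D_row · W · D_col as described in the abstract. The class name, ones initialization, and unscaled-bias handling are illustrative assumptions, not the authors' implementation:

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

class DiagonalScalingAdapter(nn.Module):
    """Sketch (not the authors' code): adapt a frozen linear layer by
    row- and column-wise diagonal scaling, W' = D_row @ W @ D_col."""

    def __init__(self, base: nn.Linear):
        super().__init__()
        self.base = base
        for p in self.base.parameters():
            p.requires_grad = False  # pre-trained weights stay frozen
        out_features, in_features = base.weight.shape
        # Only out_features + in_features (= n + m) trainable parameters;
        # initialized to ones so training starts from the base layer.
        self.row_scale = nn.Parameter(torch.ones(out_features))
        self.col_scale = nn.Parameter(torch.ones(in_features))

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        # (D_row @ W @ D_col) @ x == row_scale * (W @ (col_scale * x)),
        # so the adapted layer only adds two elementwise products.
        out = F.linear(x * self.col_scale, self.base.weight)
        out = out * self.row_scale
        if self.base.bias is not None:
            out = out + self.base.bias  # bias left unscaled (assumption)
        return out
```

Note why such an update can be high-rank despite its tiny parameter count: the update ΔW = D_row·W·D_col − W has entries (r_i·c_j − 1)·W_ij, i.e., it rescales every entry of W rather than adding a low-rank correction as LoRA does.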
Key facts
- HyperAdapt is a parameter-efficient fine-tuning method.
- It reduces trainable parameters compared to LoRA.
- Adapts pre-trained weight matrix using row- and column-wise scaling via diagonal matrices.
- Requires only n+m trainable parameters for an n×m matrix (see the parameter-count comparison after this list).
- A theoretical upper bound on the update's rank is established.
- Empirically, the method induces high-rank transformations across model layers.
- Evaluated on GLUE, arithmetic reasoning, and commonsense reasoning benchmarks.
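For a sense of scale, a back-of-the-envelope comparison; the matrix size and LoRA rank below are illustrative choices, not numbers from the paper:

```python
n, m = 4096, 4096            # illustrative n x m weight matrix
r = 8                        # illustrative LoRA rank

hyperadapt_params = n + m    # 8,192
lora_params = r * (n + m)    # 65,536 (LoRA trains A: n x r and B: r x m)
full_ft_params = n * m       # 16,777,216

print(f"HyperAdapt: {hyperadapt_params:,}  "
      f"LoRA(r={r}): {lora_params:,}  full: {full_ft_params:,}")
```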