JACTUS: A Unified Framework for Joint Adaptation and Compression of Pretrained Models
Researchers have introduced JACTUS (Joint Adaptation and Compression with a Task-aware Union of Subspaces), a method that merges parameter-efficient fine-tuning (PEFT) with low-rank compression in a single framework. Traditional pipelines compress first and fine-tune afterward, which can leave the retained subspace misaligned with the downstream task; JACTUS addresses this directly. It estimates input and pre-activation gradient covariances on a small calibration set, combines the resulting task-aware subspaces with the pretrained weight subspace into an orthogonal union, allocates rank globally by marginal gain per parameter, and then trains only a compact core matrix, improving both efficiency and performance. The paper is available on arXiv with ID 2605.02829.
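As a concrete picture of the estimation step, here is a minimal sketch in PyTorch. The hook-based accumulation, the function name, and the per-token normalization are illustrative assumptions, not the authors' implementation.

```python
import torch
import torch.nn as nn

def estimate_covariances(model, layer: nn.Linear, calib_loader, loss_fn):
    """Accumulate input and pre-activation-gradient covariances for one layer."""
    d_in, d_out = layer.in_features, layer.out_features
    cov_x = torch.zeros(d_in, d_in)    # running sum of x x^T over layer inputs
    cov_g = torch.zeros(d_out, d_out)  # running sum of g g^T over pre-activation grads
    count = 0

    def grad_hook(grad):
        g = grad.detach().reshape(-1, d_out)
        cov_g.add_(g.T @ g)

    def fwd_hook(module, inputs, output):
        nonlocal count
        x = inputs[0].detach().reshape(-1, d_in)
        cov_x.add_(x.T @ x)
        count += x.shape[0]
        output.register_hook(grad_hook)  # capture the pre-activation gradient

    handle = layer.register_forward_hook(fwd_hook)
    for batch, target in calib_loader:
        loss_fn(model(batch), target).backward()
        model.zero_grad()
    handle.remove()
    return cov_x / count, cov_g / count
```

The leading eigenvectors of these two covariances then define the task-aware input and output subspaces that enter the union described below.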
Key facts
- JACTUS stands for Joint Adaptation and Compression with a Task-aware Union of Subspaces.
- It unifies parameter-efficient fine-tuning and low-rank compression into a single framework.
- Traditional methods compress first and fine-tune afterward, which risks misaligning the compressed subspace with the task.
- JACTUS estimates input and pre-activation gradient covariances from a small calibration set.
- It forms an orthogonal union of the subspaces spanned by these covariances and the pretrained weight subspace (first sketch after this list).
- Rank is allocated globally by marginal gain per parameter (second sketch after this list).
- Only a compact core matrix is trained.
- The paper is available on arXiv with ID 2605.02829.
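Forming the union and the trainable core might look like the following. This is a minimal sketch, assuming the union combines the top eigenvectors of each covariance with the pretrained weight's singular subspace; the QR-based construction and the function names are illustrative, not the paper's exact algorithm.

```python
import torch

def top_eigvecs(cov, k):
    """Columns are the leading k eigenvectors of a symmetric PSD matrix."""
    _, vecs = torch.linalg.eigh(cov)          # eigenvalues in ascending order
    return vecs[:, -k:]

def orthogonal_union(*bases):
    """Orthonormal basis for the span of all given column spaces."""
    q, _ = torch.linalg.qr(torch.cat(bases, dim=1), mode="reduced")
    return q

def factor_layer(W, cov_x, cov_g, k):
    """Reparameterize W (d_out x d_in) as U @ C @ V.T; only C is trained."""
    U_w, _, Vh_w = torch.linalg.svd(W, full_matrices=False)
    V = orthogonal_union(top_eigvecs(cov_x, k), Vh_w[:k].T)   # input side
    U = orthogonal_union(top_eigvecs(cov_g, k), U_w[:, :k])   # output side
    C = U.T @ W @ V                                           # compact trainable core
    return U, C, V
```

Because U and V are orthonormal and frozen, only the small core C is updated during fine-tuning, and its size is set by the allocated rank. Global rank allocation by marginal gain per parameter can then be realized as a greedy selection over all layers. In this sketch, the per-layer marginal-gain sequences (for example, residual eigenvalue mass) are assumed to be precomputed; the data layout is hypothetical.

```python
import heapq

def allocate_ranks(gains, costs, budget):
    """Greedily grant rank increments with the best marginal gain per parameter.

    gains[l][r] -- marginal gain of raising layer l's rank from r to r + 1
    costs[l]    -- parameter cost of one extra rank unit in layer l
    budget      -- total parameter budget to spend
    """
    ranks = [0] * len(gains)
    heap = [(-gains[l][0] / costs[l], l) for l in range(len(gains)) if gains[l]]
    heapq.heapify(heap)
    while heap:
        _, l = heapq.heappop(heap)
        if costs[l] > budget:
            continue                      # this layer no longer fits the budget
        budget -= costs[l]
        ranks[l] += 1
        if ranks[l] < len(gains[l]):      # re-queue the layer's next increment
            heapq.heappush(heap, (-gains[l][ranks[l]] / costs[l], l))
    return ranks
```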