ARTFEED — Contemporary Art Intelligence

CERSA: Memory-Efficient Fine-Tuning via Principal Subspace Adaptation

ai-technology · 2026-05-12

A new fine-tuning technique, Cumulative Energy-Retaining Subspace Adaptation (CERSA), has been introduced to reduce memory usage when adapting large pre-trained models. Existing parameter-efficient fine-tuning (PEFT) approaches such as LoRA constrain updates to a low-rank form that may not fully capture the structure of the required weight changes. CERSA instead employs singular value decomposition (SVD) to retain the principal components that account for 90% to 95% of the cumulative spectral energy, and fine-tunes low-rank representations within this principal subspace. This reduces memory consumption while narrowing the performance gap associated with LoRA's low-rank updates. The method also eliminates the need to store the full frozen weights, making it well suited to resource-constrained environments. The paper is available on arXiv under identifier 2605.08174.
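The paper's implementation is not included in this summary, but the mechanism it describes can be sketched. The block below is a minimal, hypothetical PyTorch illustration, assuming the SVD is applied to each pre-trained weight matrix: it keeps only as many singular directions as needed to retain a chosen fraction (here 90% to 95%) of the cumulative spectral energy, and makes those low-rank factors the trainable state, so the original frozen weight never has to be stored. The names (`rank_for_energy`, `PrincipalSubspaceLinear`) and the choice to absorb the singular values into the left factor are assumptions, not the authors' code.

```python
# Minimal, hypothetical PyTorch sketch of principal-subspace adaptation
# in the spirit of CERSA. Names and design choices are illustrative only.
import torch
import torch.nn as nn
import torch.nn.functional as F


def rank_for_energy(singular_values: torch.Tensor, energy: float = 0.95) -> int:
    """Smallest rank that retains `energy` of the cumulative spectral energy.

    Spectral energy is assumed here to mean the cumulative sum of squared
    singular values, normalised by their total.
    """
    sq = singular_values ** 2
    cumulative = torch.cumsum(sq, dim=0) / sq.sum()
    rank = int((cumulative < energy).sum().item()) + 1
    return min(rank, singular_values.numel())


class PrincipalSubspaceLinear(nn.Module):
    """Linear layer whose weight is replaced by its energy-retaining SVD factors.

    Only the low-rank factors are kept and trained; the full pre-trained
    weight is discarded after decomposition, so no frozen copy is stored.
    """

    def __init__(self, linear: nn.Linear, energy: float = 0.95):
        super().__init__()
        weight = linear.weight.data                      # (out_features, in_features)
        U, S, Vh = torch.linalg.svd(weight, full_matrices=False)
        r = rank_for_energy(S, energy)
        # Principal subspace: singular values absorbed into the left factor.
        self.A = nn.Parameter(U[:, :r] * S[:r])          # (out_features, r)
        self.B = nn.Parameter(Vh[:r, :])                 # (r, in_features)
        self.bias = (
            nn.Parameter(linear.bias.data.clone()) if linear.bias is not None else None
        )

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        # The approximate weight is rebuilt on the fly from the trainable factors.
        return F.linear(x, self.A @ self.B, self.bias)
```

Replacing a model's linear layers with such modules would leave only the rank-r factors in memory; the retained rank, and therefore the memory footprint, is governed by the chosen energy threshold.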

Key facts

  • CERSA stands for Cumulative Energy-Retaining Subspace Adaptation.
  • It uses singular value decomposition (SVD) to retain principal components with 90% to 95% spectral energy (see the worked example after this list).
  • The method reduces memory consumption compared to LoRA and other PEFT methods.
  • It addresses the performance gap caused by low-rank updates in LoRA.
  • The paper is published on arXiv with ID 2605.08174.
  • CERSA fine-tunes low-rank representations from the principal subspace.
  • It eliminates the need to store full frozen weights.
  • The approach targets memory-efficient fine-tuning of large pre-trained models.
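To make the cumulative-energy criterion concrete, here is a small worked example (illustrative numbers only, not taken from the paper) showing how the retained rank would be chosen for a toy singular-value spectrum at the 90% and 95% thresholds.

```python
import numpy as np

# Toy singular-value spectrum (illustrative numbers, not from the paper).
s = np.array([10.0, 6.0, 3.0, 1.5, 0.8, 0.4, 0.2, 0.1])

# Cumulative spectral energy, normalised by the total.
energy = np.cumsum(s ** 2) / np.sum(s ** 2)
for threshold in (0.90, 0.95):
    rank = int(np.searchsorted(energy, threshold)) + 1
    print(f"retain {threshold:.0%} of spectral energy -> rank {rank} of {s.size}")
```

For this spectrum, the 90% threshold keeps 2 of 8 directions and the 95% threshold keeps 3; reductions of this kind are what drive the memory savings.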

Entities

Institutions

  • arXiv

Sources