AS-LoRA: Adaptive Selection of LoRA Components in Privacy-Preserving Federated Learning
A new framework called AS-LoRA addresses aggregation errors in differentially private federated fine-tuning of large models using LoRA. These errors stem from LoRA's multiplicative structure and are worsened by DP noise, harming training stability and accuracy. Existing methods apply a single update mode uniformly across layers and rounds, ignoring both the structural asymmetry between the two LoRA factors and the round-to-round dynamics of training. AS-LoRA introduces three adaptive axes: layer-wise freedom (each layer selects its active component independently), round-wise adaptivity (the selection is updated at every communication round), and a curvature-aware score derived from a second-order approximation of the loss. Theoretically, AS-LoRA eliminates the reconstruction-error floor of layer-tied schedules, accelerates convergence, and induces an implicit bias in the solutions it converges to. The paper is available on arXiv.
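To see where the aggregation error comes from, consider a minimal worked equation (the notation is an assumption for illustration, not taken from the paper): in LoRA, each client $k$ contributes a low-rank update $B_k A_k$ to a weight matrix, but a server that averages the factors $A_k$ and $B_k$ separately does not recover the average of the products:

$$
\Big(\tfrac{1}{K}\textstyle\sum_{k=1}^{K} B_k\Big)\Big(\tfrac{1}{K}\textstyle\sum_{k=1}^{K} A_k\Big) \;\neq\; \tfrac{1}{K}\textstyle\sum_{k=1}^{K} B_k A_k .
$$

The gap between the two sides is the aggregation error; adding DP noise to each factor before averaging enlarges it further, since the noise also enters the product through cross terms.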
Key facts
- AS-LoRA is an adaptive framework for privacy-preserving federated learning.
- It addresses aggregation error caused by LoRA's multiplicative structure.
- DP noise amplifies the aggregation error, degrading stability and accuracy.
- Existing remedies apply a single update mode uniformly across layers and rounds.
- AS-LoRA has three adaptive axes: layer-wise freedom, round-wise adaptivity, and a curvature-aware score.
- The curvature-aware score is derived from a second-order approximation of the loss; a sketch of how such a score could drive the selection appears after this list.
- AS-LoRA eliminates the reconstruction-error floor of layer-tied schedules.
- The paper is published on arXiv with ID 2605.05769.
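As a rough illustration of how the three adaptive axes could fit together, here is a minimal sketch assuming a plain gradient-descent step and a diagonal-Hessian proxy for curvature. The names (`curvature_score`, `select_components`) and the specific scoring rule are hypothetical, not the paper's actual algorithm.

```python
import numpy as np

def curvature_score(grad, hess_diag, step):
    """Second-order Taylor estimate of the loss change for an update `step`:
    delta_L ~= g . step + 0.5 * step . (H_diag * step).
    More negative means a larger predicted loss decrease."""
    g, h, s = grad.ravel(), hess_diag.ravel(), step.ravel()
    return float(g @ s + 0.5 * s @ (h * s))

def select_components(layers, lr=1e-2):
    """Per layer, pick which LoRA factor ('A' or 'B') to train this round,
    choosing the factor whose hypothetical GD step is predicted to reduce
    the loss more. `layers` maps layer name -> gradients and Hessian proxies."""
    choice = {}
    for name, t in layers.items():
        scores = {}
        for f in ("A", "B"):
            step = -lr * t[f"grad_{f}"]             # candidate GD step on factor f
            scores[f] = curvature_score(t[f"grad_{f}"], t[f"hess_{f}"], step)
        choice[name] = min(scores, key=scores.get)  # most negative predicted change
    return choice

# Round-wise loop on toy data: the selection is recomputed every round.
rng = np.random.default_rng(0)
layers = {
    f"layer{i}": {
        "grad_A": rng.normal(size=(4, 16)), "hess_A": rng.random((4, 16)) + 0.1,
        "grad_B": rng.normal(size=(16, 4)), "hess_B": rng.random((16, 4)) + 0.1,
    }
    for i in range(3)
}
for rnd in range(2):
    print(f"round {rnd}:", select_components(layers))
```

The design point this illustrates: because the score is recomputed from fresh gradients each round, the per-layer choice of which LoRA factor to train can change across communication rounds rather than being fixed in advance.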
Entities
Institutions
- arXiv