LoRA Fine-Tuning Rank Threshold Reduced to r=1 for Binary Classification
A new study challenges the prevailing rank threshold for LoRA fine-tuning, showing that rank r=1 suffices for binary classification in the neural tangent kernel regime. The original landscape analysis prescribed r ≥ 12 for canonical few-shot RoBERTa setups under squared-error loss, based on the sufficient condition r(r+1)/2 > KN. The new work replaces that symmetric Sard-form count with the non-symmetric LoRA manifold dimension, yielding the weaker capacity requirement r(m+n) - r^2 > C*·KN with C* ≈ 1.35 under Gaussian i.i.d. features, where m×n is the shape of the adapted weight matrix; this condition is already satisfied at r=1. A Polyak–Łojasiewicz inequality established in the cross-entropy setting further supports the reduced rank. Together, the three results lower the prescribed rank to 1 for binary classification. The study is published on arXiv as 2605.03724.
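A rough numeric check of the two capacity conditions follows. This is a minimal sketch, not the paper's code: the dimensions m = n = 768 (RoBERTa-base hidden size) and the constraint count K·N = 72 are assumptions, chosen only so that the symmetric bound reproduces the original r ≥ 12 prescription; the paper's actual values are not stated here.

```python
def min_rank_old(K: int, N: int) -> int:
    """Smallest r satisfying the symmetric Sard-form count r(r+1)/2 > KN."""
    r = 1
    while r * (r + 1) / 2 <= K * N:
        r += 1
    return r


def min_rank_new(K: int, N: int, m: int, n: int, c_star: float = 1.35) -> int:
    """Smallest r satisfying the non-symmetric LoRA manifold dimension
    condition r(m+n) - r^2 > C*·KN."""
    r = 1
    while r * (m + n) - r ** 2 <= c_star * K * N:
        r += 1
    return r


# Assumed values, not taken from the paper: K = 2 classes, N = 36 examples
# (so K·N = 72), and m = n = 768 for a RoBERTa-base attention weight matrix.
print(min_rank_old(2, 36))             # -> 12, the original prescription
print(min_rank_new(2, 36, 768, 768))   # -> 1, the new prescription
```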
Key facts
- Original condition: r(r+1)/2 > KN for absence of spurious local minima under squared-error loss
- Original prescription: r ≥ 12 on canonical few-shot RoBERTa setups
- New condition: r(m+n) - r^2 > C*·KN with C* ≈ 1.35 under Gaussian-iid features
- New condition satisfied at r=1 on canonical setups
- Polyak–Łojasiewicz inequality in the cross-entropy setting further supports r=1 (see the sketch after this list)
- Three results collectively reduce prescribed rank to 1 for binary classification
- Study published on arXiv as 2605.03724
- Focus on binary classification in neural tangent kernel regime
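For reference, here is the standard form of the Polyak–Łojasiewicz (PL) inequality and the convergence guarantee it yields. That the paper establishes exactly this form for the LoRA cross-entropy objective is an assumption; the mechanism sketched below is the textbook one.

```latex
% Standard PL inequality for a loss f with infimum f^*: there exists \mu > 0
% such that for all parameters \theta
\[
  \tfrac{1}{2}\,\lVert \nabla f(\theta) \rVert^{2} \;\ge\; \mu \bigl( f(\theta) - f^{*} \bigr).
\]
% Under L-smoothness, gradient descent with step size 1/L then converges
% linearly to the global minimum value:
\[
  f(\theta_{t}) - f^{*} \;\le\; \Bigl(1 - \tfrac{\mu}{L}\Bigr)^{t} \bigl( f(\theta_{0}) - f^{*} \bigr).
\]
```

Because a PL inequality forces every stationary point to be a global minimizer, establishing it at r=1 removes the need for extra rank as an optimization safeguard, which is why it supports the reduced prescription.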