ARTFEED — Contemporary Art Intelligence

MoLF: A Unified Framework Combining LoRA and Full Fine-Tuning for LLMs

ai-technology · 2026-05-11

A new arXiv paper (2605.07111) proposes Mixture of LoRA and Full (MoLF) Fine-Tuning, a framework that dynamically routes gradient updates between Low-Rank Adaptation (LoRA) and Full Fine-Tuning (FFT) at the optimizer level. The authors argue that both LoRA and FFT have structural limitations when used alone, and that MoLF allows continuous navigation between the two regimes. Empirical evaluations were conducted on SQL, Medical QA, and Counterfactual Knowledge tasks using the Gemma-3-1B, Qwen2.5-1.5B, and Qwen2.5-3B models. The paper highlights that while FFT offers the representational plasticity needed for high-entropy knowledge injection, LoRA can match or surpass FFT thanks to the regularization imposed by its low-rank update constraint. MoLF aims to route exact, full-rank gradient signals to where they are needed while retaining LoRA's constrained updates elsewhere.

Key facts

  • arXiv paper 2605.07111 introduces MoLF (Mixture of LoRA and Full) Fine-Tuning.
  • MoLF dynamically routes updates between FFT and LoRA at the optimizer level.
  • Evaluated on SQL, Medical QA, and Counterfactual Knowledge tasks.
  • Models used: Gemma-3-1B, Qwen2.5-1.5B, Qwen2.5-3B.
  • LoRA can match or surpass FFT performance thanks to the regularization of its low-rank updates.
  • FFT provides representational plasticity for high-entropy knowledge injection.
  • Relying solely on either LoRA or FFT is structurally limited.
  • MoLF enables continuous navigation between training regimes.

Entities

Institutions

  • arXiv
