ARTFEED — Contemporary Art Intelligence

GLoRA: Gauge-Aware Federated LoRA for LLM Adaptation

ai-technology · 2026-05-11

GLoRA addresses a core flaw in federated LoRA for large language models. Conventional federated LoRA averages the low-rank adaptation factors collected from clients, but those factors are gauge-dependent: a LoRA update factors as delta_W = B @ A, and for any invertible matrix R the pair (B R, R^{-1} A) encodes exactly the same update, so a single update admits infinitely many factorizations and factor-level aggregation depends on which one each client happens to hold. GLoRA instead estimates a consensus update subspace from client projectors and aggregates client updates in shared reference coordinates, preserving the semantics of the low-rank update, and it accommodates clients of differing capacity through a rank-compatible readout. The paper is available on arXiv as 2605.06733.
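To make the gauge issue concrete, here is a minimal numpy sketch (illustrative, not taken from the paper) showing that two factor pairs can encode the identical update while their factor-level average encodes a different one:

    import numpy as np

    rng = np.random.default_rng(0)
    d_out, d_in, r = 8, 6, 2

    # A LoRA update is a product of low-rank factors: delta_W = B @ A.
    B = rng.normal(size=(d_out, r))
    A = rng.normal(size=(r, d_in))

    # Gauge equivalence: (B @ R, inv(R) @ A) encodes the identical update.
    R = rng.normal(size=(r, r))
    B2, A2 = B @ R, np.linalg.inv(R) @ A
    assert np.allclose(B @ A, B2 @ A2)

    # Yet factor-level averaging of these two "clients" changes the update:
    # mean(B) @ mean(A) != mean(B @ A) in general.
    B_avg, A_avg = (B + B2) / 2, (A + A2) / 2
    print(np.allclose(B_avg @ A_avg, B @ A))  # False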

Key facts

  • Federated LoRA enables parameter-efficient LLM adaptation under decentralized data.
  • Direct averaging of LoRA factors is representation-dependent due to gauge equivalence.
  • GLoRA estimates a consensus update subspace from client projectors.
  • GLoRA aggregates client updates in shared reference coordinates (sketched in the code after this list).
  • GLoRA performs this aggregation entirely in low-rank form, preserving the update's semantics.
  • GLoRA supports heterogeneous client capacities via a rank-compatible readout (see the usage example below).
  • The paper is available on arXiv with ID 2605.06733.
  • The method is proposed to fix the semantic mismatch introduced by factor-level aggregation in existing federated LoRA.
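This summary does not spell out GLoRA's exact estimator, so the following numpy sketch is a hypothetical reading of the key facts, not the paper's algorithm: form each client's gauge-invariant projector, extract a consensus basis, and average the client updates in those shared coordinates while staying in low-rank form. The function name consensus_aggregate and all implementation choices (QR for the client bases, a truncated SVD for the consensus basis) are illustrative assumptions.

    import numpy as np

    def consensus_aggregate(factors, r):
        """Aggregate client LoRA factors in a shared reference subspace.

        factors : list of (B_i, A_i) pairs, B_i (d_out, r_i), A_i (r_i, d_in)
        r       : consensus rank

        Hypothetical sketch of the gauge-aware idea; the paper's exact
        estimator may differ.
        """
        # Orthonormal basis Q_i for each client's update column space.
        # The projector Q_i @ Q_i.T is gauge-invariant: it is unchanged
        # under B -> B @ R, A -> inv(R) @ A.
        Qs = [np.linalg.qr(B)[0] for B, _ in factors]

        # The top-r left singular vectors of [Q_1 | ... | Q_n] equal the
        # top-r eigenvectors of the summed projectors: a consensus basis.
        U_star, _, _ = np.linalg.svd(np.hstack(Qs), full_matrices=False)
        U_star = U_star[:, :r]

        # Each client's update in the shared coordinates, computed in
        # low-rank form as (U_star.T @ B_i) @ A_i, then averaged.
        coords = np.mean([(U_star.T @ B) @ A for B, A in factors], axis=0)
        return U_star, coords    # aggregated update: U_star @ coords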

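Continuing the hypothetical sketch above, a rank-compatible readout falls out naturally: each client's contribution (U_star.T @ B_i) @ A_i has shape (r, d_in) whatever its local rank r_i, so clients of different capacities can be aggregated together. A usage example under the same assumptions:

    # Usage example (assumes consensus_aggregate from the sketch above).
    import numpy as np

    rng = np.random.default_rng(1)
    d_out, d_in = 16, 12

    # Three clients with heterogeneous LoRA ranks r_i = 2, 4, 3.
    factors = [(rng.normal(size=(d_out, r_i)), rng.normal(size=(r_i, d_in)))
               for r_i in (2, 4, 3)]

    U_star, coords = consensus_aggregate(factors, r=4)
    delta_W = U_star @ coords
    print(delta_W.shape, np.linalg.matrix_rank(delta_W))  # (16, 12) 4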
Entities

Institutions

  • arXiv
