ARTFEED — Contemporary Art Intelligence

Segment-Level Robustness Framework for LoRA-Tuned LLMs

other · 2026-05-06

S²R² is a new framework that addresses the sensitivity of large language models to prompt variations by enforcing segment-level robustness during LoRA fine-tuning. Unlike existing techniques that enforce consistency over entire sequences, S²R² decomposes both clean and perturbed generations into semantic segments, aligns them with an optimal-transport objective, and penalizes the segments exhibiting the largest meaning drift. The framework also adds an adapter-stability regularizer, motivated by segment-level attention reallocation, that uses LoRA norm control as a proxy for limiting perturbation-amplified evidence shifts. A PAC-Bayesian complexity view explains why controlling adapter growth supports transfer. The paper is available on arXiv under ID 2605.01605.
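The paper's exact formulation isn't reproduced in this digest, but the core idea — align clean and perturbed segments with optimal transport, then penalize the pairs with the largest meaning drift — can be sketched as follows. Everything here is an assumption for illustration: the segment embeddings, the cosine-distance cost, the entropy-regularized Sinkhorn solver, and the top-k selection are stand-ins, not S²R²'s actual objective.

```python
import numpy as np

def sinkhorn(cost, reg=0.1, n_iters=200):
    """Entropy-regularized optimal transport (Sinkhorn iterations)
    between uniform distributions over clean and perturbed segments."""
    n, m = cost.shape
    a = np.full(n, 1.0 / n)          # mass on clean segments
    b = np.full(m, 1.0 / m)          # mass on perturbed segments
    K = np.exp(-cost / reg)
    u = np.ones(n)
    for _ in range(n_iters):
        v = b / (K.T @ u)
        u = a / (K @ v)
    return u[:, None] * K * v[None, :]   # transport plan

def segment_robustness_penalty(clean_segs, pert_segs, top_k=2):
    """Hypothetical segment-level penalty: align segment embeddings
    with OT, then keep only the pairs carrying the largest drift
    (cosine distance weighted by transported mass)."""
    cn = clean_segs / np.linalg.norm(clean_segs, axis=1, keepdims=True)
    pn = pert_segs / np.linalg.norm(pert_segs, axis=1, keepdims=True)
    cost = 1.0 - cn @ pn.T               # cosine-distance cost matrix
    plan = sinkhorn(cost)
    drift = plan * cost                  # per-pair transported drift
    flat = np.sort(drift.ravel())[::-1]  # largest drifts first
    return flat[:top_k].sum()            # penalize top-k drifting pairs
```

With identical clean and perturbed segments the transport plan concentrates on matching pairs whose cost is zero, so the penalty is near zero; genuine semantic drift between aligned segments raises it.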

Key facts

  • S²R² is a segment-level framework for robust LoRA fine-tuning.
  • It decomposes clean and perturbed generations into semantic segments.
  • Segments are aligned with an optimal-transport objective.
  • Penalizes segments with the largest meaning drift.
  • Includes an adapter-stability regularizer motivated by segment-level attention reallocation.
  • Uses LoRA norm control as a proxy for limiting perturbation-amplified evidence shifts.
  • A PAC-Bayesian complexity view explains why controlling adapter growth supports transfer.
  • Published on arXiv with ID 2605.01605.
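The adapter-stability idea in the list above — keeping the effective LoRA update small as a proxy for limiting perturbation-amplified evidence shifts, with a PAC-Bayesian reading that a smaller update stays closer to the pretrained prior — can be sketched as a norm penalty on the low-rank update. The function names and the Frobenius-norm penalty form are illustrative assumptions, not the paper's actual regularizer.

```python
import numpy as np

def lora_update_norm(A, B):
    """Frobenius norm of the effective LoRA update Delta W = B @ A.
    Hypothetical proxy for 'adapter growth': in a PAC-Bayesian view,
    a larger update moves the posterior further from the pretrained
    prior, loosening the generalization bound."""
    return np.linalg.norm(B @ A, ord="fro")

def regularized_loss(task_loss, adapters, lam=0.01):
    """Add a norm-control term over all LoRA adapter pairs (A, B);
    `lam` trades task fit against adapter growth."""
    penalty = sum(lora_update_norm(A, B) for A, B in adapters)
    return task_loss + lam * penalty
```

A zero-initialized adapter contributes no penalty, so training starts from the pretrained behavior and pays a growing cost as the update norm increases.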

Entities

Institutions

  • arXiv

Sources