ARTFEED — Contemporary Art Intelligence

GiVA: Gradient-Based Initialization for Vector-Based Fine-Tuning

other · 2026-04-25

GiVA introduces a gradient-based initialization strategy for vector-based adaptation methods in parameter-efficient fine-tuning. Unlike LoRA, which trains a pair of low-rank matrices per adapted weight, vector-based methods train only small scaling vectors and so use far fewer parameters, but they often need higher ranks to match LoRA's performance, which drives up training cost. GiVA achieves training times comparable to LoRA while preserving the extreme parameter efficiency of vector-based methods. Evaluated on natural language understanding, natural language generation, and image classification benchmarks, GiVA consistently matches or outperforms existing vector-based methods and LoRA while reducing rank requirements. The method is detailed in arXiv preprint 2604.21901.
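
The preprint's exact procedure is not reproduced here, but the general shape of the idea can be sketched. Below is a minimal PyTorch sketch assuming a VeRA-style adapter (frozen random projections, with only two scaling vectors trained); the VectorAdapter and grad_init names are hypothetical, and the initialization rule shown, a single normalized gradient step on a calibration batch, is one plausible reading of "gradient-based initialization", not the GiVA algorithm itself.

    import torch
    import torch.nn as nn

    class VectorAdapter(nn.Module):
        # Frozen pretrained linear layer plus frozen random projections;
        # only the scaling vectors d and b are trained.
        def __init__(self, base: nn.Linear, rank: int):
            super().__init__()
            self.base = base.requires_grad_(False)  # frozen pretrained weight
            out_f, in_f = base.out_features, base.in_features
            self.register_buffer("A", torch.randn(rank, in_f) / in_f ** 0.5)
            self.register_buffer("B", torch.randn(out_f, rank) / rank ** 0.5)
            self.d = nn.Parameter(torch.full((rank,), 0.1))   # trainable
            self.b = nn.Parameter(torch.full((out_f,), 0.1))  # trainable

        def forward(self, x):
            # h = W0 x + b * (B (d * (A x)))
            return self.base(x) + self.b * ((self.d * (x @ self.A.T)) @ self.B.T)

    def grad_init(adapter, loss_fn, calib_x, calib_y, scale=1.0):
        # Illustrative guess at a gradient-based init: probe the loss on a
        # calibration batch and move the vectors one normalized gradient
        # step away from their constant init, so fine-tuning starts along
        # a descent direction.
        loss = loss_fn(adapter(calib_x), calib_y)
        g_d, g_b = torch.autograd.grad(loss, [adapter.d, adapter.b])
        with torch.no_grad():
            adapter.d -= scale * g_d / (g_d.norm() + 1e-8)
            adapter.b -= scale * g_b / (g_b.norm() + 1e-8)

Starting d and b at small nonzero constants keeps both probe gradients informative: with b at zero, the gradient with respect to d would vanish and the probe would reveal nothing about d.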

Key facts

  • GiVA is a gradient-based initialization strategy for vector-based adaptation.
  • It achieves training times comparable to LoRA.
  • It maintains the extreme parameter efficiency of vector-based methods (see the parameter-count sketch after this list).
  • Evaluated on NLU, NLG, and image classification benchmarks.
  • Consistently outperforms or matches existing vector-based methods and LoRA.
  • Reduces rank requirements compared to other vector-based methods.
  • Published as arXiv preprint 2604.21901.
  • Addresses the trade-off between parameter efficiency and training cost.
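
For a sense of scale, the arithmetic below compares trainable-parameter counts for a single square weight matrix; the layer size and rank are arbitrary illustrative numbers, not figures from the preprint.

    # Trainable parameters for one hypothetical 4096x4096 layer.
    d_in = d_out = 4096
    rank = 256  # vector-based methods often need a higher rank than LoRA

    lora_params = rank * (d_in + d_out)  # LoRA trains two low-rank matrices
    vec_params = rank + d_out            # vector-based: two scaling vectors

    print(f"LoRA, r={rank}:         {lora_params:,}")  # 2,097,152
    print(f"vector-based, r={rank}: {vec_params:,}")   # 4,352

Even at a much higher rank, the vector-based adapter trains orders of magnitude fewer parameters, which is the efficiency the higher-rank requirement trades against in training cost.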

Entities

Institutions

  • arXiv

Sources

  • arXiv preprint 2604.21901