FedCoLLM: Federated Co-Tuning Framework for Large and Small Language Models
FedCoLLM is a novel federated framework designed for co-tuning Large Language Models (LLMs) and Small Language Models (SLMs). The approach adaptively transfers server-side LLM knowledge to client SLMs while enriching the LLM with domain insights from the clients. It attaches lightweight adapters to the SLMs to facilitate knowledge exchange, preserving data privacy while minimizing computational and communication overhead. The framework was evaluated with a range of public LLMs and SLMs on domain-specific tasks. The research is published on arXiv under identifier 2411.11707.
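The summary names two building blocks: parameter-efficient adapters on the client SLMs and a bidirectional knowledge transfer between the server LLM and the clients. FedCoLLM's exact objective is not given here, so the sketch below only illustrates the generic machinery such a scheme typically rests on: a temperature-scaled distillation loss for logit-level knowledge exchange, and FedAvg-style aggregation of client adapter parameters (both are assumptions, not the paper's verbatim algorithm).

```python
import numpy as np

def softmax(logits, temperature=1.0):
    """Numerically stable softmax with temperature scaling."""
    z = np.asarray(logits, dtype=float) / temperature
    z = z - z.max()
    e = np.exp(z)
    return e / e.sum()

def kd_loss(student_logits, teacher_logits, temperature=2.0):
    """KL(teacher || student) on temperature-softened distributions,
    the standard knowledge-distillation objective. In a co-tuning
    setup it can be applied in both directions (LLM->SLM and SLM->LLM)
    so that only logits, never raw data, cross the network."""
    p = softmax(teacher_logits, temperature)
    q = softmax(student_logits, temperature)
    # the T^2 factor keeps gradient magnitudes comparable across temperatures
    return float(np.sum(p * (np.log(p) - np.log(q))) * temperature**2)

def fedavg(adapter_updates, sample_counts):
    """FedAvg: aggregate client adapter parameters weighted by the
    number of local samples. Only the small adapter tensors are
    communicated, which is what keeps the overhead low."""
    total = float(sum(sample_counts))
    return sum(n * u for n, u in zip(sample_counts, adapter_updates)) / total
```

A server round would then look like: collect adapter updates from clients, aggregate them with `fedavg`, and use `kd_loss` on a shared (public or proxy) batch of logits to transfer knowledge in each direction.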
Key facts
- FedCoLLM is a parameter-efficient federated framework for co-tuning LLMs and SLMs.
- It adaptively transfers server-side LLM knowledge to client SLMs.
- It enriches the LLM with domain insights from clients.
- Uses lightweight adapters with SLMs for knowledge exchange.
- Respects data privacy and minimizes computational and communication overhead.
- Evaluated using various public LLMs and SLMs across domain-specific tasks.
- Published on arXiv under identifier 2411.11707.