ARTFEED — Contemporary Art Intelligence

FedProxy Framework Introduces Proxy SLMs for Federated LLM Fine-Tuning

ai-technology · 2026-04-22

FedProxy, a new federated adaptation framework, tackles three challenges that arise when fine-tuning Large Language Models across clients: intellectual property protection, client privacy, and performance degradation on heterogeneous data. Existing techniques such as Offsite-Tuning (OT) protect LLM IP by letting clients train only lightweight adapters, but those adapters are a performance bottleneck and leave a notable gap relative to centralized fine-tuning. FedProxy replaces them with a Proxy Small Language Model (SLM) compressed from the proprietary LLM, which serves as a high-fidelity surrogate for collaborative fine-tuning. The framework addresses the trilemma through a three-stage process: Efficient Representation via server-guided compression, Robust Optimization with heterogeneity-aware fusion, and Secure Deployment to preserve privacy. The result is improved fine-tuning performance while the model's IP and the clients' data remain protected. The work was posted on arXiv under identifier 2604.19015v1 as a cross-listed (cross) announcement.
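
The abstract does not spell out the compression, local training, or fusion procedures, so the following is only a minimal sketch of how one FedProxy-style round could fit together under simple assumptions. The function names, the keep-every-N layer selection, the stand-in gradient, and the data-size weighting are all illustrative placeholders, not details from the paper.

    # Minimal, assumption-heavy sketch of one FedProxy-style round; the paper's
    # actual compression, training, and fusion methods are not reproduced here.
    import numpy as np

    def compress_to_proxy(llm_params: dict, keep_every: int = 4) -> dict:
        """Stage 1, Efficient Representation: derive a proxy SLM from the
        proprietary LLM, here by keeping every Nth layer (placeholder rule)."""
        return {name: p.copy()
                for i, (name, p) in enumerate(sorted(llm_params.items()))
                if i % keep_every == 0}

    def local_finetune(proxy: dict, client_data: np.ndarray, lr: float = 0.1) -> dict:
        """A client fine-tunes the proxy on private data and returns only a
        parameter delta, so raw data never leaves the client."""
        update = {}
        for name, p in proxy.items():
            grad = np.full_like(p, client_data.mean())  # stand-in for a real gradient
            update[name] = -lr * grad
        return update

    def heterogeneity_aware_fusion(updates: list, sizes: list) -> dict:
        """Stage 2, Robust Optimization: fuse client deltas, here weighted by
        local data size as a crude stand-in for heterogeneity awareness."""
        total = sum(sizes)
        return {name: sum(w / total * u[name] for u, w in zip(updates, sizes))
                for name in updates[0]}

    # Toy end-to-end round: server compresses, two clients adapt, server fuses.
    llm = {f"layer_{i}.weight": np.ones(3) for i in range(8)}   # proprietary LLM (toy)
    proxy = compress_to_proxy(llm)
    client_datasets = [np.random.rand(16), np.random.rand(64)]
    updates = [local_finetune(proxy, d) for d in client_datasets]
    fused = heterogeneity_aware_fusion(updates, [len(d) for d in client_datasets])
    for name in proxy:
        proxy[name] += fused[name]  # Stage 3, Secure Deployment, would ship this adapted proxy

The toy code only shows the data flow: the full LLM never leaves the server, clients only ever see the compressed proxy, and the server only receives parameter deltas rather than raw data.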

Key facts

  • FedProxy is a new federated adaptation framework for Large Language Models
  • It addresses IP protection, client privacy, and performance loss on heterogeneous data (see the trust-boundary sketch after this list)
  • Replaces weak adapters with a unified Proxy Small Language Model compressed from proprietary LLMs
  • Uses three-stage architecture: Efficient Representation, Robust Optimization, Secure Deployment
  • Published on arXiv with identifier 2604.19015v1
  • Announcement type is cross
  • Existing methods like Offsite-Tuning suffer from performance bottlenecks
  • Proxy SLM serves as high-fidelity surrogate for collaborative fine-tuning
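
As a hedged illustration of the trust boundary implied by these facts, the sketch below shows which artifacts cross the wire in a FedProxy-style setup under simple assumptions: only the compressed proxy SLM travels to clients and only fine-tuning deltas travel back. The class and message names are hypothetical.

    # Hypothetical trust-boundary sketch: what each party holds and what is exchanged.
    from dataclasses import dataclass, field

    @dataclass
    class Server:
        llm_weights: bytes = b"<proprietary LLM weights, never leave the server>"
        proxy_weights: bytes = b"<proxy SLM compressed from the LLM>"

        def distribute(self) -> bytes:
            return self.proxy_weights  # only the compressed proxy is sent to clients

    @dataclass
    class Client:
        private_data: list = field(default_factory=lambda: ["local records"])

        def finetune(self, proxy: bytes) -> bytes:
            # Train the received proxy on private_data locally (omitted here);
            # only the resulting parameter delta is returned to the server.
            assert proxy  # the client works with the proxy, never the full LLM
            return b"<proxy parameter delta>"

    server, client = Server(), Client()
    delta = client.finetune(server.distribute())
    # The server fuses deltas from many clients; it never receives client.private_data,
    # and the client never receives server.llm_weights.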

Entities

Institutions

  • arXiv

Sources