ARTFEED — Contemporary Art Intelligence

Privacy-Preserving LLM Personalization via Composable Adapters

ai-technology · 2026-04-25

A recent study (arXiv:2604.21571) introduces a three-layer framework for privacy-preserving personalization of large language models. The method separates personal information from shared weights by combining a static base model, composable domain-expert LoRA adapters, and per-user proxy artifacts; deleting a user's proxy constitutes deterministic unlearning. Evaluated on Phi-3.5-mini and Llama-3.1-8B, the technique produces user-specific differentiation that reverts to baseline after proxy deletion (KL divergence of roughly 0.21 nats, 82-89% verification pass rate) while keeping cross-user contamination near zero. The architecture also mitigates model inversion, membership inference, and training-data extraction attacks.
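The layered composition can be illustrated with a minimal sketch. This is not the paper's implementation; the names (`compose`, `base`, `lora`, `user_proxy`) and the toy 2x2 weights are illustrative assumptions. The key property shown is that all three layers combine additively, so dropping the per-user proxy and recomposing yields exactly the shared base-plus-domain stack, which is what makes the unlearning deterministic.

```python
# Hypothetical sketch of the three-layer composition: static base weights,
# a domain-expert LoRA delta, and a per-user proxy delta, all additive.

def compose(base, deltas):
    """Effective weight = static base + sum of additive adapter deltas."""
    out = [row[:] for row in base]
    for delta in deltas:
        for i, row in enumerate(delta):
            for j, v in enumerate(row):
                out[i][j] += v
    return out

# Layer 1: frozen base weights, shared by all users.
base = [[1.0, 0.0], [0.0, 1.0]]

# Layer 2: domain-expert LoRA update, expressed as a low-rank product B @ A.
A = [[0.1, 0.2]]   # rank-1 factor (1 x 2)
B = [[0.5], [0.5]] # rank-1 factor (2 x 1)
lora = [[B[i][0] * A[0][j] for j in range(2)] for i in range(2)]

# Layer 3: per-user proxy artifact, also an additive delta.
user_proxy = [[0.01, 0.0], [0.0, 0.01]]

personalized = compose(base, [lora, user_proxy])

# Deterministic unlearning: delete the proxy and recompose; the result is
# bit-identical to the never-personalized base + domain stack.
unlearned = compose(base, [lora])
```

Because personalization lives entirely in the proxy artifact, no retraining or gradient-based "forgetting" is needed: removal of the artifact is the unlearning operation.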

Key facts

  • Paper arXiv:2604.21571 proposes a three-layer architecture for privacy-preserving LLM personalization.
  • Architecture uses a static base model, composable domain-expert LoRA adapters, and per-user proxy artifacts.
  • Deletion of proxy artifacts constitutes deterministic unlearning.
  • Evaluated on Phi-3.5-mini and Llama-3.1-8B models.
  • KL divergence of approximately 0.21 nats after proxy removal.
  • 82-89% verification pass rate for unlearning.
  • Near-zero cross-user contamination.
  • Mitigates model inversion, membership inference, and training-data extraction attacks.
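The ~0.21-nat figure above measures how close the unlearned model's output distribution is to the baseline's. A minimal sketch of that metric, with made-up next-token distributions (the values are illustrative, not from the paper's evaluation):

```python
import math

def kl_divergence_nats(p, q):
    """KL(p || q) in nats, for probability distributions over the same vocabulary."""
    return sum(pi * math.log(pi / qi) for pi, qi in zip(p, q) if pi > 0.0)

# Illustrative next-token distributions for one context:
baseline  = [0.70, 0.20, 0.10]  # static base model
unlearned = [0.55, 0.30, 0.15]  # model after proxy deletion

divergence = kl_divergence_nats(baseline, unlearned)
```

Averaging this quantity over evaluation prompts gives a single scalar; a value near zero indicates the post-deletion model is distributionally indistinguishable from the baseline.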

Entities

Institutions

  • arXiv

Sources