Federated Multimodal Unlearning via Entanglement-Aware Anchor Closure
EASE is a newly introduced method for unlearning in federated multimodal learning (FML). In FML, models are trained on private image-text pairs held by decentralized clients, but the joint embedding complicates unlearning by entangling forgotten knowledge across modalities and across client gradient subspaces. Existing federated unlearning methods neither sever the cross-modal reconstruction channel induced by bilinear coupling nor separate the update directions for forgotten and retained knowledge. The authors propose an Anchor Principle: forgotten alignments persist through three residual anchors arising from bilinear coupling, subspace entanglement, and ongoing federated updates. EASE, which aims to strengthen privacy in decentralized multimodal learning, is described in arXiv preprint 2605.00733.
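The cross-modal reconstruction channel and its closure can be illustrated with a toy bilinear similarity score. The sketch below is an illustrative assumption, not the paper's implementation: the coupling matrix `W`, the embeddings, and the projection-based "bilateral displacement" are all hypothetical stand-ins for the mechanism the summary describes.

```python
import numpy as np

rng = np.random.default_rng(0)
d = 8  # toy embedding dimension

v = rng.normal(size=d)          # visual embedding of a forgotten pair
t = rng.normal(size=d)          # language embedding of the same pair
W = rng.normal(size=(d, d))     # bilinear coupling matrix (toy)

def sim(v, t, W):
    """Bilinear cross-modal similarity s = v^T W t."""
    return float(v @ W @ t)

def reject(x, direction):
    """Remove the component of x along `direction` (orthogonal rejection)."""
    u = direction / np.linalg.norm(direction)
    return x - (x @ u) * u

s_before = sim(v, t, W)

# Displace the visual branch off the coupled text direction W t,
# then the language branch off the coupled visual direction W^T v_d,
# so the forgotten pair can no longer be reconstructed from either side.
v_d = reject(v, W @ t)
t_d = reject(t, W.T @ v_d)

s_after = sim(v_d, t_d, W)  # the forgotten alignment score collapses to ~0
```

In this toy setting the displaced pair scores (numerically) zero under the bilinear coupling, while embeddings of unrelated pairs are untouched; the paper's actual displacement presumably also preserves retained alignments.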
Key facts
- EASE is a method for federated multimodal unlearning.
- Federated Multimodal Learning (FML) trains models on decentralized clients with private image-text pairs.
- Joint embedding entangles forgotten knowledge across modalities and client gradient subspaces.
- Previous approaches do not sever the cross-modal reconstruction channel.
- The Anchor Principle identifies three residual anchors for forgotten alignments.
- Bilateral displacement of visual and language branches closes the cross-modal reconstruction channel.
- The method disentangles forget-relevant and retain-relevant gradient directions.
- The paper is on arXiv with ID 2605.00733.
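The gradient disentanglement named in the key facts can be sketched with a standard orthogonal-projection step: keep only the part of the forget-relevant gradient that lies outside the subspace spanned by retain-relevant gradients. This is a minimal sketch of that generic technique, not EASE's actual algorithm; all names and shapes are assumptions.

```python
import numpy as np

rng = np.random.default_rng(1)
d = 16  # toy number of model parameters

# Toy per-batch gradients: rows of G_retain span the retain-relevant
# subspace; g_forget is the update direction for forgotten knowledge.
G_retain = rng.normal(size=(3, d))
g_forget = rng.normal(size=d)

# Orthonormal basis of the retain subspace via QR decomposition.
Q, _ = np.linalg.qr(G_retain.T)  # Q: (d, 3), orthonormal columns

# Project out the retain-subspace component, leaving an unlearning
# direction that does not disturb retained knowledge (to first order).
g_unlearn = g_forget - Q @ (Q.T @ g_forget)

# g_unlearn is orthogonal to every retained gradient direction.
residual = G_retain @ g_unlearn  # ~ zero vector
```

Applying `g_unlearn` instead of `g_forget` illustrates the separation of forget-relevant and retain-relevant directions; the paper's subspace-entanglement anchor suggests its actual construction is more involved than a single projection.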