ARTFEED — Contemporary Art Intelligence

VLA-Forget: Unlearning for Embodied Foundation Models

ai-technology · 2026-04-25

A novel unlearning technique named VLA-Forget has been introduced for Vision-Language-Action (VLA) models used in robotic manipulation. These foundation models, including OpenVLA, combine a visual encoder, a cross-modal projector, and a language backbone to predict tokenized actions for robots. The central challenge is removing unsafe, irrelevant, or privacy-sensitive behaviors without degrading perception, language grounding, or action control. Unlike standalone vision or language models, undesirable knowledge in VLA models is distributed across perception, alignment, and reasoning layers, so partial unlearning of any single module is insufficient. VLA-Forget takes a hybrid approach that targets all relevant modules simultaneously, aiming for effective forgetting with minimal utility loss. The work is published as arXiv:2604.03956v2.
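To make the pipeline concrete, below is a minimal sketch of an OpenVLA-style policy in PyTorch: a visual encoder feeds a cross-modal projector, whose output is fused with instruction tokens in a language backbone that predicts tokenized actions. The module sizes, class names, and the simplified patch-based encoder are illustrative stand-ins, not details taken from the paper.

```python
import torch
import torch.nn as nn

class ToyVLAPolicy(nn.Module):
    """Illustrative OpenVLA-style policy: visual encoder -> projector -> language backbone -> action tokens."""
    def __init__(self, vis_dim=256, lm_dim=512, vocab_size=1000, n_action_tokens=7):
        super().__init__()
        # Visual encoder: maps flattened image patches to visual features (stand-in for a ViT).
        self.visual_encoder = nn.Sequential(nn.Linear(3 * 16 * 16, vis_dim), nn.GELU())
        # Cross-modal projector: aligns visual features with the language model's embedding space.
        self.projector = nn.Linear(vis_dim, lm_dim)
        # Language backbone: stand-in for a decoder-only transformer LM.
        self.backbone = nn.TransformerEncoder(
            nn.TransformerEncoderLayer(d_model=lm_dim, nhead=8, batch_first=True),
            num_layers=2)
        self.token_embed = nn.Embedding(vocab_size, lm_dim)
        # Action head: predicts discretized (tokenized) action bins, one per action dimension.
        self.action_head = nn.Linear(lm_dim, vocab_size)
        self.n_action_tokens = n_action_tokens

    def forward(self, image_patches, instruction_tokens):
        vis = self.projector(self.visual_encoder(image_patches))       # (B, P, lm_dim)
        txt = self.token_embed(instruction_tokens)                      # (B, T, lm_dim)
        fused = self.backbone(torch.cat([vis, txt], dim=1))             # joint visual-language sequence
        # Read out the last n_action_tokens positions as tokenized action predictions.
        return self.action_head(fused[:, -self.n_action_tokens:, :])    # (B, A, vocab)

policy = ToyVLAPolicy()
patches = torch.randn(2, 196, 3 * 16 * 16)            # dummy image patches
instr = torch.randint(0, 1000, (2, 12))                # dummy instruction token ids
action_logits = policy(patches, instr)
print(action_logits.shape)                             # torch.Size([2, 7, 1000])
```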

Key facts

  • VLA-Forget targets unlearning in Vision-Language-Action models.
  • OpenVLA-style policies fuse visual encoder, cross-modal projector, and language backbone.
  • Undesirable knowledge is distributed across perception, alignment, and reasoning layers.
  • Partial unlearning on vision or language alone is insufficient.
  • Conventional unlearning baselines can leave residual knowledge of the targeted behaviors or cause utility loss.
  • VLA-Forget is a hybrid unlearning method that targets all modules jointly (see the sketch after this list).
  • The method removes unsafe, spurious, or privacy-sensitive behaviors.
  • Research is published on arXiv with ID 2604.03956v2.
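The article does not spell out VLA-Forget's training objective, so the following is only a generic hybrid-unlearning sketch under common assumptions: gradient ascent on a forget set combined with a retain-set loss, with gradients flowing through the visual encoder, projector, and language backbone at once. It reuses the toy policy from the earlier sketch; the function name, loss weights, and dummy batches are hypothetical, not the paper's method.

```python
import torch

def hybrid_unlearning_step(policy, optimizer, forget_batch, retain_batch, alpha=1.0, beta=1.0):
    """One illustrative unlearning step touching all modules jointly:
    ascend the action loss on forget-set trajectories, descend it on retain-set trajectories."""
    criterion = torch.nn.CrossEntropyLoss()

    def action_loss(batch):
        patches, instr, action_targets = batch             # action_targets: (B, A) token ids
        logits = policy(patches, instr)                     # (B, A, vocab)
        return criterion(logits.flatten(0, 1), action_targets.flatten())

    optimizer.zero_grad()
    # Negated forget-set loss erases the targeted behavior; the retain-set loss limits utility loss.
    loss = -alpha * action_loss(forget_batch) + beta * action_loss(retain_batch)
    loss.backward()                                          # gradients reach encoder, projector, and backbone
    optimizer.step()
    return loss.item()

# Hypothetical usage with the toy policy defined above and dummy batches.
optimizer = torch.optim.AdamW(policy.parameters(), lr=1e-5)
forget = (torch.randn(2, 196, 768), torch.randint(0, 1000, (2, 12)), torch.randint(0, 1000, (2, 7)))
retain = (torch.randn(2, 196, 768), torch.randint(0, 1000, (2, 12)), torch.randint(0, 1000, (2, 7)))
print(hybrid_unlearning_step(policy, optimizer, forget, retain))
```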

Entities

Institutions

  • arXiv

Sources