ARTFEED — Contemporary Art Intelligence

RETROFIT: Continual Learning with Controlled Forgetting for Binary Security

other · 2026-04-25

RETROFIT is a continual-learning method that addresses the performance degradation of deep learning models for binary security analysis as the threat landscape evolves. It lets a deployed model keep learning without replaying historical data, managing knowledge retention through controlled forgetting: previously trained and newly fine-tuned models are consolidated via retrospective-free parameter merging, with updates constrained to low-rank and sparse subspaces so that old and new knowledge remain approximately orthogonal. This allows the model to adapt to emerging threats while preserving existing knowledge. The paper is available on arXiv as 2511.11439.
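The merging idea described above can be sketched roughly as follows. This is an illustrative reconstruction, not the authors' implementation: the function names, the SVD-truncation choice for the low-rank component, and magnitude-thresholding for the sparse component are all assumptions made for the sake of a concrete example.

```python
import numpy as np

def low_rank_sparse_delta(w_old, w_new, rank=2, sparsity=0.05):
    """Project the fine-tuning update (w_new - w_old) onto a low-rank
    plus sparse subspace -- a stand-in for RETROFIT's constrained update."""
    delta = w_new - w_old

    # Low-rank component: keep only the top singular directions of the update.
    u, s, vt = np.linalg.svd(delta, full_matrices=False)
    low_rank = u[:, :rank] @ np.diag(s[:rank]) @ vt[:rank, :]

    # Sparse component: keep the largest-magnitude entries of the residual.
    residual = delta - low_rank
    k = max(1, int(sparsity * residual.size))
    thresh = np.sort(np.abs(residual), axis=None)[-k]
    sparse = np.where(np.abs(residual) >= thresh, residual, 0.0)

    return low_rank + sparse

def merge(w_old, w_new, alpha=0.5, **kw):
    """Replay-free merge: apply a scaled, subspace-constrained update
    to the previously trained weights, without touching any old data."""
    return w_old + alpha * low_rank_sparse_delta(w_old, w_new, **kw)
```

Because the retained update lives in a small low-rank-plus-sparse subspace, most directions of the original weight matrix are left untouched, which is one way to keep old and new knowledge approximately orthogonal.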

Key facts

  • RETROFIT is a continual-learning method for binary security analysis.
  • Requires no replay of historical data.
  • Achieves controlled forgetting via retrospective-free parameter merging.
  • Constrains parameter changes to low-rank and sparse subspaces.
  • Consolidates legacy and newly acquired knowledge.
  • Addresses performance degradation in evolving threat landscapes.
  • Published on arXiv with ID 2511.11439.

Entities

Institutions

  • arXiv

Sources