ARTFEED — Contemporary Art Intelligence

Adversarial Malware Evasion via Similarity-Constrained Perturbations

ai-technology · 2026-04-25

A new research paper on arXiv (2604.21310) investigates whether attackers can generate adversarial malware samples that evade deep learning-based classifiers without triggering drift monitoring systems. The proposed method crafts targeted adversarial examples in the classifier's standardized feature space, using similarity regularizers to keep perturbed samples distributionally close to clean malware. The optimization objective balances targeted misclassification against minimization of drift signals. The study highlights critical limitations of deep learning models in non-stationary environments, where both malware and detection systems evolve over time.
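To make the trade-off concrete, below is a minimal sketch of what such a similarity-constrained objective could look like in PyTorch. The function name, the Adam optimizer, the centroid-based similarity term, and all hyperparameters are illustrative assumptions for exposition; the paper's actual regularizer and optimization procedure may differ.

```python
import torch
import torch.nn.functional as F

def craft_evading_sample(model, x_clean, target_class, clean_mean,
                         lam=1.0, steps=200, lr=0.01):
    """Hypothetical sketch: optimize a perturbation delta so that
    x_clean + delta is (a) classified as `target_class` and
    (b) statistically close to clean malware, so that drift monitors
    tracking feature statistics see little signal.

    model:      any differentiable classifier over standardized features
    x_clean:    1-D standardized feature vector of a malware sample
    clean_mean: centroid of clean malware features (one simple choice
                of similarity anchor; not necessarily the paper's)
    lam:        weight balancing evasion vs. drift-signal minimization
    """
    delta = torch.zeros_like(x_clean, requires_grad=True)
    opt = torch.optim.Adam([delta], lr=lr)
    target = torch.tensor([target_class])
    for _ in range(steps):
        x_adv = x_clean + delta
        logits = model(x_adv.unsqueeze(0))
        # Targeted misclassification term.
        loss_cls = F.cross_entropy(logits, target)
        # Similarity regularizer: stay near the clean-feature centroid.
        loss_sim = F.mse_loss(x_adv, clean_mean)
        loss = loss_cls + lam * loss_sim
        opt.zero_grad()
        loss.backward()
        opt.step()
    return (x_clean + delta).detach()
```

Larger values of `lam` push the perturbed sample toward the clean distribution at the cost of evasion strength, which is exactly the tension the optimization objective is described as balancing.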

Key facts

  • Paper ID: arXiv:2604.21310
  • Announce type: cross-listed
  • Focuses on adversarial evasion in non-stationary malware detection
  • Proposes similarity-constrained perturbations to minimize drift signals
  • Uses standardized feature space for targeted adversarial examples
  • Balances targeted misclassification against drift-signal minimization
  • Addresses real-world limitations of deep learning models

Entities

Institutions

  • arXiv

Sources