ARTFEED — Contemporary Art Intelligence

Coward: New Proactive Backdoor Detection for Federated Learning

other · 2026-05-07

Coward, a new proactive backdoor detection method, has been proposed to address limitations in existing federated learning (FL) defenses. In an FL backdoor attack, malicious clients upload poisoned updates to compromise the global model. Existing detectors fall into two camps, each with practical flaws: passive methods break down under non-i.i.d. data distributions and random client participation, while current proactive methods rely on backdoor coexistence effects and are therefore misled by out-of-distribution bias. Coward instead draws on the multi-backdoor collision effect, in which consecutively planted distinct backdoors suppress earlier ones, yielding a more reliable detection signal. The method is detailed in a paper on arXiv (2508.02115).
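To make the collision idea concrete, the following is a minimal conceptual sketch, not the paper's implementation: the server plants its own benign "canary" backdoor in a toy linear model, then checks whether a client's update suppresses it. All names, magnitudes, and the linear-model setup are illustrative assumptions; under the collision effect, a malicious client planting a distinct backdoor tends to overwrite the server's canary, while benign updates leave it intact.

```python
# Conceptual sketch of collision-based proactive detection (assumed setup,
# not the Coward paper's actual algorithm or models).
import numpy as np

rng = np.random.default_rng(0)

def trigger_accuracy(w, trigger_x, target_y):
    """Fraction of trigger-stamped inputs the linear model maps to target_y."""
    logits = trigger_x @ w                      # (n_samples, n_classes)
    return float(np.mean(logits.argmax(axis=1) == target_y))

# Toy model: 10-dim inputs, 3 classes, linear weights.
d, k = 10, 3
global_w = rng.normal(size=(d, k))

# Server canary: inputs stamped with a fixed trigger pattern, target class 2.
trigger = np.zeros(d)
trigger[:3] = 5.0
canary_x = rng.normal(size=(20, d)) + trigger
server_target = 2

# "Plant" the canary by pushing the trigger direction toward the target class.
global_w[:, server_target] += trigger * 1.0

def detect(w, client_updates, canary_x, target, threshold=0.5):
    """Flag clients whose update collapses the server-planted canary accuracy."""
    flagged = []
    for cid, delta in client_updates.items():
        if trigger_accuracy(w + delta, canary_x, target) < threshold:
            flagged.append(cid)
    return flagged

# Benign client: small noise update. Malicious client: plants a *distinct*
# backdoor on the same trigger toward class 0, colliding with the canary.
benign_delta = rng.normal(scale=0.01, size=(d, k))
malicious_delta = np.zeros((d, k))
malicious_delta[:, 0] += trigger * 3.0          # overpowers the class-2 canary

updates = {"benign": benign_delta, "malicious": malicious_delta}
print(detect(global_w, updates, canary_x, server_target))  # → ['malicious']
```

The key design point the sketch illustrates is that the server's planted backdoor acts as a tripwire: it stays intact under benign updates, so a drop in canary accuracy after applying a client's update is the detection signal, with no assumption that attacker and server backdoors can coexist.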

Key facts

  • Coward is a proactive backdoor detection method for federated learning.
  • It addresses limitations of passive and existing proactive detection methods.
  • Passive methods are disrupted by non-i.i.d. data and random client participation.
  • Existing proactive methods are misled by out-of-distribution bias.
  • Coward is based on multi-backdoor collision effects.
  • Consecutively planted distinct backdoors suppress earlier ones.
  • The paper is available on arXiv with ID 2508.02115.
  • The method aims to improve security in federated learning systems.

Entities

Institutions

  • arXiv

Sources