AdaBFL: Multi-Layer Defensive Aggregation for Byzantine-Robust Federated Learning
Researchers propose AdaBFL, a multi-layer defensive adaptive aggregation method for Byzantine-robust federated learning. Federated learning enables collaborative model training without exposing client data, but it is vulnerable to poisoning attacks in which malicious clients submit corrupted model updates. Existing Byzantine-robust methods either struggle to defend against multiple attack types simultaneously or require auxiliary datasets on the server. AdaBFL introduces a three-layer defensive mechanism that adaptively adjusts the weights assigned to individual defense algorithms in order to counter complex attacks. The paper also analyzes the convergence properties of the proposed method.
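The poisoning threat is easy to see concretely. The minimal sketch below (not from the paper; the attack form, client counts, and values are illustrative assumptions) shows how a single Byzantine client's scaled, sign-flipped update corrupts plain FedAvg averaging, while a standard robust aggregator such as the coordinate-wise median resists it:

```python
import numpy as np

rng = np.random.default_rng(0)
dim = 10

# Honest clients send noisy copies of the true gradient.
true_grad = rng.normal(size=dim)
honest = [true_grad + 0.1 * rng.normal(size=dim) for _ in range(8)]

# A Byzantine client submits a scaled, sign-flipped update
# (a simple, hypothetical model-poisoning attack).
byzantine = [-10.0 * true_grad]

updates = np.stack(honest + byzantine)

# Plain FedAvg: the single malicious update dominates the mean.
fedavg = updates.mean(axis=0)

# A robust aggregator (coordinate-wise median) ignores the outlier.
median = np.median(updates, axis=0)

print("error (FedAvg): ", np.linalg.norm(fedavg - true_grad))
print("error (median):", np.linalg.norm(median - true_grad))
```

Running this, the FedAvg error is large (the one attacker pulls the average roughly in the opposite direction of the true gradient), while the median stays close to the honest consensus, which is exactly the failure mode Byzantine-robust aggregation targets.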
Key facts
- AdaBFL is a multi-layer defensive adaptive aggregation method for Byzantine-robust federated learning.
- Federated learning is vulnerable to poisoning attacks from malicious clients.
- Existing Byzantine-robust methods either struggle to defend against multiple attack types or require server-side datasets.
- AdaBFL uses a three-layer defensive mechanism.
- The method adaptively adjusts the weights of its defense algorithms (see the sketch after this list).
- The paper analyzes the convergence properties of AdaBFL.
- The research is published on arXiv with ID 2604.27434.
- The method aims to counter complex attacks in federated learning.
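The summary does not spell out what AdaBFL's three layers are, so the following is only a generic sketch of the core idea the key facts describe: combining several known defense algorithms (coordinate-wise median and trimmed mean here) and adaptively reweighting them by how consistent each candidate aggregate is with the majority of client updates. The Krum-style scoring rule, the softmax weighting, and every function name below are hypothetical assumptions, not the paper's algorithm.

```python
import numpy as np

def trimmed_mean(updates, trim_frac=0.2):
    """Coordinate-wise trimmed mean: drop the smallest and largest
    values in each coordinate, then average the rest."""
    k = int(trim_frac * len(updates))
    s = np.sort(updates, axis=0)
    return s[k:len(updates) - k].mean(axis=0)

def coord_median(updates):
    """Coordinate-wise median of the client updates."""
    return np.median(updates, axis=0)

def majority_score(candidate, updates, m):
    """Sum of squared distances from a candidate aggregate to its m
    nearest client updates (smaller = more consistent with the majority)."""
    d = np.sort(np.sum((updates - candidate) ** 2, axis=1))
    return d[:m].sum()

def adaptive_aggregate(updates, defenses, m):
    """Run each defense, then weight the candidates adaptively via a
    softmax over their negative majority-consistency scores."""
    candidates = np.stack([d(updates) for d in defenses])
    scores = np.array([majority_score(c, updates, m) for c in candidates])
    weights = np.exp(-scores / scores.mean())
    weights /= weights.sum()
    return weights @ candidates, weights

# Illustrative round: 8 honest clients, 1 sign-flipping Byzantine client.
rng = np.random.default_rng(1)
dim, n_honest = 10, 8
true_grad = rng.normal(size=dim)
updates = np.stack(
    [true_grad + 0.1 * rng.normal(size=dim) for _ in range(n_honest)]
    + [-10.0 * true_grad]
)

agg, w = adaptive_aggregate(updates, [coord_median, trimmed_mean], m=n_honest - 1)
print("defense weights:", w)
print("aggregation error:", np.linalg.norm(agg - true_grad))
```

The design point this sketch illustrates is that no single defense handles every attack type, so the server scores each defense's output against the bulk of the client updates each round and shifts weight toward whichever defenses currently agree with the honest majority, with no server-side dataset required.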