Robust Federated Learning via Loss-Based Client Clustering
This paper proposes a strategy for making federated learning (FL) resilient to Byzantine attacks via loss-based client clustering. The method assumes a trusted server equipped with a small auxiliary dataset and requires only two honest participants (the server and one client) to be effective, with no prior knowledge of the number of malicious clients. Theoretical analysis establishes bounded optimality gaps even under strong attacks. Experiments show the method substantially outperforms standard and robust FL baselines (Mean, Trimmed Mean, Median, Krum, Multi-Krum) across a range of attack strategies, including label flipping. The paper is available on arXiv (2508.12672).
Key facts
- Federated learning enables collaborative model training without sharing private data.
- The threat model is Byzantine (arbitrarily malicious) clients with a trusted server.
- The server has a trustworthy side dataset.
- Only two honest participants (server and one client) are required.
- No prior knowledge of the number of malicious clients is needed.
- Theoretical analysis shows bounded optimality gaps under strong Byzantine attacks.
- Outperforms baselines: Mean, Trimmed Mean, Median, Krum, Multi-Krum.
- Tested under label flipping and other attack strategies.
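The summary above does not spell out the aggregation rule, but the general idea of loss-based client clustering can be sketched as follows: the server scores each client's submitted model by its loss on the trusted side dataset, splits the scores into two clusters, and averages only the low-loss cluster. This is a minimal illustrative sketch, not the paper's exact algorithm; the linear model, the 1-D 2-means split, and all function names here are assumptions for illustration.

```python
import numpy as np

def side_loss(w, X, y):
    # Mean-squared error of a linear model w on the server's trusted side dataset.
    return float(np.mean((X @ w - y) ** 2))

def two_means_1d(vals, iters=20):
    # Simple 1-D 2-means clustering of scalar losses; returns the two centers.
    c = [min(vals), max(vals)]
    for _ in range(iters):
        groups = [[], []]
        for v in vals:
            groups[0 if abs(v - c[0]) <= abs(v - c[1]) else 1].append(v)
        c = [float(np.mean(g)) if g else c[i] for i, g in enumerate(groups)]
    return c

def robust_aggregate(updates, X_side, y_side):
    # Score each client update on the trusted side dataset, split the
    # losses into two clusters, and average only the low-loss cluster.
    losses = [side_loss(w, X_side, y_side) for w in updates]
    c = two_means_1d(losses)
    thresh = (min(c) + max(c)) / 2.0
    kept = [w for w, l in zip(updates, losses) if l <= thresh]
    return np.mean(kept, axis=0), len(kept)

# Demo (hypothetical setup, not the paper's experiments): 6 honest clients
# near the true weights, 4 attackers sending negated weights.
rng = np.random.default_rng(0)
d = 5
w_true = rng.normal(size=d)
X_side = rng.normal(size=(50, d))
y_side = X_side @ w_true
honest = [w_true + 0.01 * rng.normal(size=d) for _ in range(6)]
malicious = [-w_true for _ in range(4)]
w_agg, n_kept = robust_aggregate(honest + malicious, X_side, y_side)
```

Because the attackers' models incur a much larger loss on the trusted side dataset, the 2-means split isolates them and the aggregate is computed from the six honest updates only. Note that this sketch needs no prior knowledge of the number of attackers, matching the key fact above.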