FeDa4Fair: Benchmarking Fairness in Federated Learning
FeDa4Fair is a newly introduced framework for benchmarking fairness in Federated Learning (FL). While FL enables collaborative model training under privacy constraints, it can create an 'illusion of fairness': global models that appear equitable on average while still discriminating at the level of individual clients. Existing fairness-enhancing methods in FL typically mitigate bias with respect to a single binary sensitive attribute, overlooking two practical scenarios: attribute-bias, where different clients exhibit unfairness toward different sensitive attributes, and value-bias, where clients show conflicting biases toward different values of the same attribute. FeDa4Fair is the first framework designed to rigorously evaluate fairness methods under these heterogeneous conditions. It is described in arXiv paper 2506.21095 and provides a library supporting reliable and reproducible fairness research in FL.
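The 'illusion of fairness' can be made concrete with a small sketch. The code below is a hypothetical illustration, not the FeDa4Fair API: it computes a standard fairness metric (statistical parity difference) per client and on the pooled predictions, using synthetic data in which two clients are biased toward opposite values of the same binary attribute (the value-bias scenario). Their biases cancel, so the global metric reads as perfectly fair.

```python
def spd(preds, groups):
    """Statistical parity difference: P(yhat=1 | g=1) - P(yhat=1 | g=0)."""
    p1 = [p for p, g in zip(preds, groups) if g == 1]
    p0 = [p for p, g in zip(preds, groups) if g == 0]
    return sum(p1) / len(p1) - sum(p0) / len(p0)

# Two synthetic clients, each holding binary predictions and a binary
# sensitive attribute. Client A favors group 1; client B favors group 0.
client_a = ([1, 1, 1, 0, 0, 0, 1, 0], [1, 1, 1, 1, 0, 0, 0, 0])
client_b = ([0, 0, 0, 1, 1, 1, 0, 1], [1, 1, 1, 1, 0, 0, 0, 0])

print(spd(*client_a))  # +0.5: client A is unfair toward group 0
print(spd(*client_b))  # -0.5: client B is unfair toward group 1

# Pooling the clients' data, the opposite biases cancel exactly:
pooled_preds = client_a[0] + client_b[0]
pooled_groups = client_a[1] + client_b[1]
print(spd(pooled_preds, pooled_groups))  # 0.0: globally "fair"
```

A global evaluation reports an SPD of 0.0 even though each client's model behavior is strongly biased, which is why per-client, multi-attribute evaluation of the kind FeDa4Fair targets is needed.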
Key facts
- FeDa4Fair is a benchmarking framework for fairness evaluation in Federated Learning.
- It addresses the 'illusion of fairness' where global models appear fair but discriminate at the client level.
- Existing FL fairness solutions typically handle only a single binary sensitive attribute.
- FeDa4Fair covers attribute-bias and value-bias scenarios.
- The framework is introduced in arXiv paper 2506.21095.
- It aims to support robust and reproducible fairness research.
- FeDa4Fair is the first framework of its kind for heterogeneous fairness conditions.
- The paper was announced in arXiv's replace-cross listing.
Entities
Institutions
- arXiv