Taxonomy of Uncertainty in Sequential Fair ML Decisions
A study posted to arXiv (2604.21711) proposes a taxonomy of uncertainty in sequential decision-making for fair machine learning. The authors note that while fairness has been studied extensively in supervised learning, many real-world ML systems operate online and sequentially: earlier decisions shape which data is observed later, under uncertainty arising from unobserved counterfactuals and finite samples. This uncertainty falls disproportionately on marginalized groups, which are systematically under-observed because of historical exclusion and selective feedback. The taxonomy distinguishes model, feedback, and prediction uncertainty. The authors argue that algorithmic methods alone cannot resolve structural inequalities, but that fair ML can support socio-technical decision systems by surfacing biases, clarifying trade-offs, and enabling governance.
Key facts
- Paper arXiv:2604.21711 introduces a taxonomy of uncertainty in sequential decision-making for fair ML
- Fairness is well studied in supervised learning but many real ML applications are online and sequential
- Prior decisions inform future ones under uncertainty due to unobserved counterfactuals and finite samples
- Under-represented groups are systematically under-observed due to historical exclusion and selective feedback
- Taxonomy covers model, feedback, and prediction uncertainty
- Algorithmic approaches alone cannot resolve structural inequalities
- Fair ML can support socio-technical decision systems by surfacing biases and clarifying trade-offs
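The feedback-uncertainty point above can be illustrated with a toy simulation (not from the paper; the lending scenario, threshold, and score model are illustrative assumptions). Outcomes are observed only for approved applicants, so the decision-maker's data is selectively censored and the counterfactual for rejected applicants is never seen:

```python
import random

def simulate(threshold, n_rounds=5000, seed=0):
    """Toy sequential lending loop with selective feedback:
    repayment is observed only when the applicant is approved."""
    rng = random.Random(seed)
    observed = []  # (score, repaid) pairs the decision-maker actually sees
    for _ in range(n_rounds):
        true_p = rng.random()                # applicant's true repayment probability
        score = true_p + rng.gauss(0, 0.1)   # noisy model prediction of true_p
        if score >= threshold:               # decision rule: approve high scorers only
            repaid = rng.random() < true_p   # outcome observed only on approval
            observed.append((score, repaid))
        # rejected applicants: counterfactual outcome is never observed
    return observed

data = simulate(threshold=0.7)
approval_rate = len(data) / 5000
repay_rate = sum(repaid for _, repaid in data) / len(data)
```

Because only high-scoring applicants generate feedback, the observed repayment rate far exceeds the population average, and any model retrained on `data` inherits this censoring; groups that start out under-observed accumulate even less evidence over time.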
Entities
Institutions
- arXiv