New Causal Fairness Framework for Continuous Protected Attributes
A recently released framework for fair machine learning, available on arXiv with ID 2605.05882v1, tackles bias in AI predictions tied to protected attributes such as race, gender, and age, with a focus on attributes that are continuous (e.g., age). Classical fairness notions such as Statistical Parity (SP) require predictions to be unaffected by these attributes, but this can be overly restrictive when an attribute influences mediating variables that are legitimate business necessities. Causal refinements of SP distinguish permissible from impermissible causal paths and complement SP with Predictive Parity (PP), which requires the predictor to reflect only the legitimate influence of business necessities. The proposed framework formalizes SP and PP through path-specific partial derivatives in structural causal models, tailored to continuous protected attributes, to support fair AI systems that account for intricate causal relationships.
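To make the path-specific idea concrete, here is a minimal sketch in a toy linear structural causal model. All names, structural equations, and coefficients below are illustrative assumptions, not taken from the paper: a continuous protected attribute A (say, age) affects a permissible mediator M (a business necessity) and also enters the predictor directly (the impermissible path). A path-specific derivative along the direct path holds M fixed, while the total derivative sums all paths.

```python
# Hypothetical toy SCM; structural equations and coefficients are
# illustrative assumptions, not taken from the paper.

def mediator(a):
    # Permissible path A -> M (e.g., a business-necessity variable).
    return 2.0 * a + 1.0

def predictor(a, m):
    # Predictor f(A, M); the direct dependence on A is the
    # impermissible path.
    return 0.5 * a + 0.3 * m

def direct_partial(a, eps=1e-6):
    # Path-specific partial derivative d f / d A with M held fixed:
    # the effect transmitted only along the direct (impermissible) path.
    m = mediator(a)
    return (predictor(a + eps, m) - predictor(a - eps, m)) / (2 * eps)

def total_derivative(a, eps=1e-6):
    # Total derivative through all paths: direct + mediated,
    # i.e. df/dA + (df/dM) * (dM/dA).
    f = lambda x: predictor(x, mediator(x))
    return (f(a + eps) - f(a - eps)) / (2 * eps)

a0 = 30.0
print(direct_partial(a0))    # ~0.5
print(total_derivative(a0))  # ~1.1 = 0.5 + 0.3 * 2.0
```

Under a path-specific SP constraint of the kind the paper describes, the derivative along the impermissible direct path (here 0.5) would be required to vanish, while the mediated contribution through the business necessity (0.3 × 2.0) could remain.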
Key facts
- arXiv ID: 2605.05882v1
- Announce type: cross
- Addresses biases in AI predictions
- Protected attributes include race, gender, age
- Classical fairness notion: Statistical Parity (SP)
- Causal fairness distinguishes permissible from impermissible causal paths
- Introduces Predictive Parity (PP)
- New framework uses path-specific partial derivatives for continuous attributes