EU AI Act's 'Appropriate Accuracy' Depends on Contextual Normative Choices
A new paper on arXiv challenges the notion that AI accuracy is a purely objective, technical property, arguing that it instead rests on context-dependent normative decisions. As a case study, the analysis examines the 2024 European Union AI Act, which mandates an 'appropriate level of accuracy' for high-risk systems. The authors identify four key choices that shape any accuracy evaluation: selecting metrics, balancing multiple metrics, measuring against representative data, and determining acceptance thresholds. These techno-normative choices determine which errors are prioritized, how risks are distributed, and how trade-offs are resolved. The paper offers a legal-technical framework to support rigorous AI deployment.
Key facts
- Paper published on arXiv with ID 2604.03254v2
- Challenges the view that accuracy is an objective, measurable, purely technical property
- Uses the 2024 European Union AI Act as a primary case study
- EU AI Act mandates an 'appropriate level of accuracy' for high-risk systems
- Identifies four choices: selecting metrics, balancing multiple metrics, measuring against representative data, determining acceptance thresholds
- Argues that evaluating AI performance depends on context-dependent normative decisions
- Techno-normative choices affect error prioritization, risk distribution, and trade-offs
- Provides a legal-technical analysis for rigorous AI deployment
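The normative weight of two of the four choices above, metric selection and threshold setting, can be illustrated with a small sketch. The data and threshold values below are invented for illustration and do not come from the paper; the point is that two evaluations of the same classifier can report the same headline accuracy while distributing errors very differently.

```python
def confusion(labels, scores, threshold):
    """Count TP/FP/FN/TN for binary labels at a given decision threshold."""
    tp = sum(1 for y, s in zip(labels, scores) if y == 1 and s >= threshold)
    fp = sum(1 for y, s in zip(labels, scores) if y == 0 and s >= threshold)
    fn = sum(1 for y, s in zip(labels, scores) if y == 1 and s < threshold)
    tn = sum(1 for y, s in zip(labels, scores) if y == 0 and s < threshold)
    return tp, fp, fn, tn

def metrics(labels, scores, threshold):
    """Report three common metrics; which one 'counts' is a normative choice."""
    tp, fp, fn, tn = confusion(labels, scores, threshold)
    return {
        "accuracy": (tp + tn) / len(labels),
        "precision": tp / (tp + fp) if tp + fp else 0.0,  # cost of false alarms
        "recall": tp / (tp + fn) if tp + fn else 0.0,     # cost of missed cases
    }

# Illustrative scores from a hypothetical classifier (1 = positive class).
labels = [1, 1, 1, 0, 0, 0, 0, 0]
scores = [0.9, 0.7, 0.4, 0.6, 0.3, 0.2, 0.1, 0.05]

# A lenient threshold prioritizes catching positives (high recall);
# a strict one prioritizes avoiding false alarms (high precision).
lenient = metrics(labels, scores, threshold=0.35)
strict = metrics(labels, scores, threshold=0.65)

print(lenient)  # accuracy 0.875, precision 0.75, recall 1.0
print(strict)   # accuracy 0.875, precision 1.0, recall ~0.667
```

Both thresholds yield identical accuracy (0.875) on this toy data, yet they assign the errors to different parties: the lenient setting burdens those falsely flagged, the strict setting burdens those falsely cleared. Deciding which distribution is 'appropriate' is exactly the kind of contextual normative judgment the paper argues the AI Act's accuracy requirement entails.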
Entities
Institutions
- arXiv
- European Union
Locations
- European Union