Epistemology Reframes Human-AI Complementarity
A new paper on arXiv reframes human-AI complementarity, the idea that humans and AI together outperform either alone, through the lens of epistemology. The authors argue that complementarity faces several challenges: it lacks precise theoretical anchoring, is formalized only as a post hoc accuracy indicator, ignores other desiderata of human-AI interaction, and abstracts away from magnitude-cost profiles, all of which make it hard to achieve in empirical settings. Drawing on computational reliabilism, they propose grounding complementarity in justificatory AI discourse and use historical instances to address these challenges.
Key facts
- Paper reframes human-AI complementarity using epistemology
- Complementarity holds that humans and AI together outperform either alone
- Lacks precise theoretical anchoring
- Formalized only as post hoc accuracy indicator
- Ignores other desiderata of human-AI interaction
- Abstracts away from magnitude-cost profile
- Hard to achieve in empirical settings
- Draws on computational reliabilism
Entities
Institutions
- arXiv