ARTFEED — Contemporary Art Intelligence

New Theory Derives When Human-AI Teams Can Outperform Individuals

other · 2026-05-12

A new paper on arXiv (2605.08710) provides the first theoretical framework specifying when human-AI teams can outperform their best individual member. Analyzing confidence-based aggregation rules through signal detection theory and information theory, the authors derive four key results: a complementarity theorem showing that a team outperforms its best member only when the human-model error correlation ρ_HM falls below a threshold ρ*; minimax bounds proving that performance gains scale as Θ(√Δd), where Δd is the difference in metacognitive sensitivity between the two members; an impossibility result showing that no confidence-based rule achieves complementarity when ρ_HM ≥ ρ*; and a multi-class generalization in which the threshold scales as ρ*_K ≈ ρ*/√(K-1) with the number of classes K. The model accurately predicts observed team accuracy on ImageNet-16H (R=0.94) and CIFAR-10H (R=0.91), and the multi-class threshold scaling is validated on human data (R=0.93). The results speak to a well-documented failure mode, that roughly 70% of human-AI teams fail to outperform their best member, by offering tight bounds and impossibility guarantees for complementarity.
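The threshold behavior can be illustrated with a minimal simulation. The sketch below is a toy, not the paper's model: it assumes a two-class Gaussian signal-detection setup with correlated noise and a sensitivity-weighted evidence sum as the aggregation rule, and the sensitivity and correlation values are invented for illustration.

```python
import numpy as np
from math import erf, sqrt

rng = np.random.default_rng(0)

def team_accuracy(d_h, d_m, rho, n=200_000):
    """Toy 2-class signal-detection model of a human-AI team.

    Each agent observes Gaussian evidence with sensitivity d';
    their noise terms are correlated with coefficient rho (the
    error correlation). The team decision is a confidence-weighted
    sum, weighting each agent's evidence by its sensitivity.
    """
    y = rng.choice([-1.0, 1.0], size=n)            # true class
    cov = [[1.0, rho], [rho, 1.0]]                 # correlated unit noise
    noise = rng.multivariate_normal([0.0, 0.0], cov, size=n)
    x_h = d_h / 2 * y + noise[:, 0]                # human evidence
    x_m = d_m / 2 * y + noise[:, 1]                # model evidence
    z = d_h * x_h + d_m * x_m                      # weighted team vote
    return float(np.mean(np.sign(z) == y))

def individual_accuracy(d):
    # P(correct) = Phi(d/2) for an unbiased observer
    return 0.5 * (1 + erf(d / 2 / sqrt(2)))

d_h, d_m = 1.5, 2.0                                # model is the stronger member
best = individual_accuracy(d_m)
low = team_accuracy(d_h, d_m, rho=0.0)             # weakly correlated errors
high = team_accuracy(d_h, d_m, rho=0.9)            # strongly correlated errors
print(f"best individual: {best:.3f}")
print(f"team, rho=0.0:   {low:.3f}")               # team beats best member
print(f"team, rho=0.9:   {high:.3f}")              # no complementarity
```

With uncorrelated errors the combined evidence has sensitivity √(d_h² + d_m²) > d_m, so the team beats its stronger member; at ρ = 0.9 the shared noise dominates and the team falls below the model alone, matching the qualitative picture of a threshold ρ* on the error correlation.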

Key facts

  • 70% of human-AI teams fail to outperform their best member
  • Paper derives tight bounds for confidence-based aggregation rules
  • Integrates signal detection theory with information-theoretic analysis
  • Complementarity theorem: teams outperform iff ρ_HM < ρ*
  • Minimax bounds show gains scale as Θ(√Δd), where Δd is the metacognitive sensitivity difference
  • Impossibility result: no confidence-based rule achieves complementarity when ρ_HM ≥ ρ*
  • Multi-class generalization: ρ*_K ≈ ρ*/√(K-1)
  • Model matches ImageNet-16H (R=0.94) and CIFAR-10H (R=0.91)
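The √(K-1) scaling means the complementarity threshold tightens quickly as the label space grows. A quick back-of-envelope evaluation (the binary threshold value 0.6 is invented for illustration; the paper's actual ρ* depends on the agents' sensitivities):

```python
import math

def rho_star_k(rho_star, k):
    """Approximate multi-class threshold: rho*_K ≈ rho* / sqrt(K - 1)."""
    return rho_star / math.sqrt(k - 1)

rho_star = 0.6                 # illustrative binary (K=2) threshold
for k in (2, 10, 16):          # CIFAR-10H has 10 classes, ImageNet-16H has 16
    print(f"K={k:2d}: rho*_K ≈ {rho_star_k(rho_star, k):.3f}")
```

At K=10 the tolerable error correlation drops to a third of its binary value, and at K=16 to under 0.16, so complementarity becomes harder to achieve as the task gains classes.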

Entities

Institutions

  • arXiv

Sources