AI Hiring Supply Chains Complicate Bias Measurement and Accountability
A recent study posted on arXiv (2604.22679) examines why supply chain dependencies complicate bias evaluation and accountability in AI hiring systems. It argues that contemporary AI hiring systems operate within intricate networks of data vendors, model developers, platform providers, and deploying organizations, diffusing responsibility across the chain. This diffusion produces two significant problems: biases emerge from interactions among components rather than from any individual part, and proprietary configurations prevent integrated, end-to-end evaluation. Drawing on a review of the technical literature and of regulatory frameworks including the EU AI Act, NYC Local Law 144, and Colorado's AI Act, the study contends that both technical and regulatory perspectives overlook this supply chain challenge, obstructing effective bias assessment and accountability attribution.
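The compounding effect described above can be illustrated with a small numeric sketch. The stages, group names, and selection rates below are hypothetical assumptions for illustration, not figures from the paper: two stages that each satisfy the common four-fifths (80%) rule in isolation can jointly fail it, because end-to-end selection rates are the product of per-stage rates.

```python
# Hypothetical sketch: two pipeline stages (e.g., a vendor's resume screen
# and a platform's ranking cutoff) that each pass the four-fifths rule in
# isolation, but fail it when composed. All rates are illustrative.

FOUR_FIFTHS = 0.8

def impact_ratio(rate_a: float, rate_b: float) -> float:
    """Selection-rate ratio of the less-favored group to the more-favored one."""
    return min(rate_a, rate_b) / max(rate_a, rate_b)

# Per-stage selection rates by (hypothetical) demographic group
stage1 = {"group_x": 0.90, "group_y": 0.80}  # e.g., data vendor's screen
stage2 = {"group_x": 0.90, "group_y": 0.80}  # e.g., platform's cutoff

for name, stage in [("stage 1", stage1), ("stage 2", stage2)]:
    r = impact_ratio(stage["group_x"], stage["group_y"])
    print(f"{name}: impact ratio {r:.3f} -> "
          f"{'passes' if r >= FOUR_FIFTHS else 'fails'}")

# End-to-end selection rate is the product of the stage rates
combined = {g: stage1[g] * stage2[g] for g in stage1}
r = impact_ratio(combined["group_x"], combined["group_y"])
print(f"pipeline: impact ratio {r:.3f} -> "
      f"{'passes' if r >= FOUR_FIFTHS else 'fails'}")
```

Each stage shows an impact ratio of about 0.889 and passes on its own, while the composed pipeline drops to about 0.790 and fails, which is why auditing any single component cannot certify the system as a whole.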
Key facts
- arXiv paper 2604.22679 examines AI hiring supply chains
- Supply chains include data vendors, model developers, platform providers, and deploying organizations
- Bias emerges from component interactions, and fragmented responsibility complicates its attribution
- Proprietary configurations prevent integrated evaluation
- Regulatory responses include EU AI Act, NYC Local Law 144, Colorado's AI Act
- Both technical and regulatory perspectives overlook supply chain challenge
- Bias evaluation and accountability attribution are complicated by dependency chains
- Study uses literature review and regulatory analysis
Entities
Institutions
- arXiv
- European Union
- New York City
- Colorado