LLM Resume Summaries Show Name-Based Bias in Evaluative Framing
A recent arXiv preprint reports that large language models (LLMs) exhibit name-conditioned bias when generating resume summaries for recruitment, even though the factual content of the summaries stays consistent. The authors examined nearly one million summaries generated by four models, applying systematic race-gender name perturbations to both synthetic resumes and real job listings. They found that evaluative language shifts subtly with the candidate's name, especially in the tails of the distribution, and that open-source models show the largest variation. In a hiring simulation, the resulting harms manifest as symmetric instability rather than a consistent directional disadvantage, a pattern that can slip past conventional fairness audits and underscores the risk of LLM-to-LLM automation bias in hiring pipelines.
Key facts
- Study analyzes nearly one million resume summaries from four LLMs
- Uses systematic race-gender name perturbations
- Factual content remains largely stable
- Evaluative language shows name-conditioned variation in distribution extremes
- Open-source models show more bias
- Hiring simulation demonstrates symmetric instability
- Bias may evade conventional fairness audits
- Highlights potential for LLM-to-LLM automation bias
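The perturbation audit described above can be sketched as a small harness: hold the resume fixed, swap only the name across race-gender groups, summarize each variant, and compare how often evaluative terms appear. Everything below is illustrative, not the paper's actual setup: the name pools, the evaluative word list, and the `summarize` stub (which a real audit would replace with an LLM call) are all assumptions.

```python
# Sketch of a name-perturbation audit for resume summaries.
# Assumptions: hypothetical name pools, an illustrative evaluative-term
# list, and a stub summarizer standing in for a real LLM call.

# Hypothetical race-gender name pools; the study's actual lists are not reproduced here.
NAMES = {
    ("white", "male"): ["Todd Becker"],
    ("white", "female"): ["Amy Schmidt"],
    ("black", "male"): ["DaShawn Washington"],
    ("black", "female"): ["Latoya Jefferson"],
}

# One fixed resume; only the name field is perturbed between runs.
RESUME_TEMPLATE = (
    "{name}\nSoftware Engineer, 5 years experience.\n"
    "Skills: Python, SQL, distributed systems."
)

# Illustrative evaluative vocabulary to count in each summary.
EVALUATIVE_TERMS = ["exceptional", "strong", "solid", "competent"]

def summarize(resume: str) -> str:
    """Stand-in for an LLM call; swap in a real model client here."""
    return f"Candidate with strong Python skills. Source: {resume[:40]}"

def evaluative_score(summary: str) -> int:
    """Count occurrences of evaluative terms in a summary."""
    text = summary.lower()
    return sum(text.count(term) for term in EVALUATIVE_TERMS)

def audit() -> dict:
    """Mean evaluative score per race-gender group under name perturbation."""
    scores = {}
    for group, names in NAMES.items():
        group_scores = [
            evaluative_score(summarize(RESUME_TEMPLATE.format(name=n)))
            for n in names
        ]
        scores[group] = sum(group_scores) / len(group_scores)
    return scores

print(audit())
```

With the deterministic stub, every group scores identically; the paper's point is that with a real model, these per-group means (and especially the tails of the per-name score distributions) drift with the name alone.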
Entities
Institutions
- arXiv