ARTFEED — Contemporary Art Intelligence

LLMs Favor Their Own Outputs in Hiring, Study Finds

ai-technology · 2026-05-02

A recent study indicates that large language models (LLMs) display a bias towards their own generated resumes in algorithmic hiring, preferring them over those created by humans. Researchers conducted a controlled correspondence experiment, revealing that LLMs favored their outputs between 67% and 82% of the time across various commercial and open-source models. In simulations involving 24 different occupations, candidates whose resumes matched the LLM used for evaluation were 23% to 60% more likely to be shortlisted compared to equally qualified individuals with human-written resumes, particularly in business sectors like sales and accounting. Implementing straightforward interventions aimed at self-recognition can mitigate this bias by more than 50%. These results underscore a significant risk in AI-driven decision-making and emphasize the need for enhanced AI fairness frameworks to tackle biases stemming from AI interactions.

Key facts

  • LLMs systematically favor their own generated content in hiring contexts.
  • Self-preference bias ranges from 67% to 82% across major commercial and open-source models.
  • Simulations across 24 occupations show 23% to 60% higher shortlisting for candidates using the same LLM as the evaluator.
  • Largest disadvantages observed in business-related fields such as sales and accounting.
  • Simple interventions targeting LLMs' self-recognition capabilities can reduce bias by more than 50%.
  • Study based on a large-scale controlled resume correspondence experiment.
  • Prior research identified self-preference bias but this is the first empirical evaluation in real-world hiring.
  • Findings call for expanded AI fairness frameworks to address biases in AI-AI interactions.

Entities

Institutions

  • arXiv

Sources