LAION-Aesthetics Predictor biased against LGBTQ+ people and men in image curation
A recent study on arXiv (2601.09896) audits the LAION-Aesthetics Predictor (LAP), a model that scores images for aesthetic quality and is used to curate training datasets for visual generative AI systems such as Stable Diffusion. The researchers analyzed LAP's influence on the LAION-Aesthetics Dataset, roughly 1.2 billion images selected from LAION-5B, and found that LAP disproportionately keeps images whose captions mention women while filtering out images whose captions mention men or LGBTQ+ people. Applying LAP to additional datasets, the study further finds that its aesthetic judgments encode narrow cultural perspectives. These results raise concerns about representation in generative AI models and the perpetuation of stereotypes in AI-generated imagery.
Key facts
- Study audits LAION-Aesthetics Predictor (LAP)
- LAP used to curate LAION-Aesthetics Dataset (approx. 1.2B images) from LAION-5B
- LAP disproportionately filters in images whose captions mention women
- LAP filters out images whose captions mention men or LGBTQ+ people
- Study uses three datasets for audit
- Datasets curated with LAP are widely used to train models such as Stable Diffusion
- Biases reflect narrow cultural values in aesthetic judgment
- Research published on arXiv (2601.09896)
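The curation step the study audits is, at its core, threshold filtering on a predicted aesthetic score: each candidate image gets a score, and only images above a cutoff enter the curated dataset. The sketch below illustrates that mechanism under stated assumptions; `toy_score` is a hypothetical stand-in for the real predictor (which scores images on a 1-10 scale), and the 4.5 cutoff is illustrative.

```python
# Minimal sketch of score-threshold dataset curation, the mechanism by
# which LAP selects the LAION-Aesthetics Dataset from LAION-5B.
# `toy_score` is a hypothetical stand-in for the real predictor;
# the threshold value is illustrative.
from typing import Callable, Iterable


def curate(samples: Iterable[dict],
           score: Callable[[dict], float],
           threshold: float = 4.5) -> list[dict]:
    """Keep only samples whose predicted aesthetic score meets the threshold."""
    return [s for s in samples if score(s) >= threshold]


def toy_score(sample: dict) -> float:
    # Stand-in scorer: just reads a precomputed score. The bias the study
    # describes arises when a real predictor's scores correlate with the
    # demographics mentioned in captions.
    return sample["score"]


samples = [
    {"caption": "a portrait photo", "score": 6.1},
    {"caption": "a city street", "score": 3.2},
]
print(len(curate(samples, toy_score)))  # 1 sample passes the 4.5 cutoff
```

Because every image below the cutoff is discarded outright, any systematic skew in the scorer translates directly into skewed dataset composition, which is why the study focuses on which caption groups fall above or below the threshold.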
Entities
Institutions
- arXiv
- LAION
Models
- Stable Diffusion
- LAION-Aesthetics Predictor (LAP)