AI Chatbots Show Western Individualist Bias Across Cultures
A new arXiv preprint (2604.22153) reports that leading AI systems—Claude Sonnet 4.5, GPT-5.4, and Gemini 2.5 Flash—consistently give Western-style individualist advice, even to users from collectivist societies. Researchers posed ten personal dilemmas across 10 countries on 5 continents, in 7 languages, scoring 840 responses against World Values Survey Wave 7 data. The mean gap between AI advice and local values was +0.76 on a 1-5 scale (t=15.65, p<0.001), with the largest discrepancies in Nigeria (+1.85) and India (+0.82). Japan was the sole exception: there, AI advice was more collectivist than actual local values. The findings raise ethical questions about cultural bias in AI systems deployed globally.
Key facts
- Study tested Claude Sonnet 4.5, GPT-5.4, and Gemini 2.5 Flash
- Ten real-life personal dilemmas posed across 10 countries in 7 languages
- Total of 840 scored responses compared against World Values Survey Wave 7
- Mean gap of +0.76 on a 1-5 scale (t=15.65, p<0.001)
- Largest gaps in Nigeria (+1.85) and India (+0.82)
- Japan was the only country where AI advice was more collectivist than local values
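The headline statistic is a one-sample t-test on per-response gaps (AI score minus the local World Values Survey benchmark). A minimal sketch of that computation, using made-up gap values rather than the study's actual data:

```python
import math

def one_sample_t(gaps, mu0=0.0):
    """One-sample t-test: is the mean gap significantly different from mu0?"""
    n = len(gaps)
    mean = sum(gaps) / n
    var = sum((g - mean) ** 2 for g in gaps) / (n - 1)  # sample variance
    se = math.sqrt(var / n)                             # standard error of the mean
    t = (mean - mu0) / se
    return mean, t

# Hypothetical per-response gaps on the study's 1-5 scale (illustrative only,
# not the paper's data; the real analysis covers 840 scored responses):
gaps = [0.9, 0.5, 1.1, 0.6, 0.8, 0.7, 1.0, 0.4]
mean, t = one_sample_t(gaps)
print(f"mean gap = {mean:+.2f}, t = {t:.2f}")
```

A positive mean gap with a large t-statistic, as reported in the study, indicates the AI responses skew systematically more individualist than the local benchmark rather than varying randomly around it.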
Entities
Institutions
- arXiv
- World Values Survey
Locations
- Nigeria
- India
- Japan