LLMs Show Surprising Pro-Japan Cultural Bias, New Study Finds
A new preprint posted to arXiv reports that large language models (LLMs) exhibit a pronounced cultural bias toward Japan, contrary to prior findings of Western or Anglocentric dominance. The researchers built the Culture-Related Open Questions (CROQ) dataset, grounded in a comprehensive taxonomy of culture-related topics, to probe LLMs with open-ended cultural questions. Across models, the results showed a clear tendency to favor Japanese culture over others. The study also found that prompting in high-resource languages such as English yields more diverse outputs and weakens the models' inclination toward countries where the prompt language has official status. Finally, the authors investigate at which stage of LLM training these biases emerge. The work surfaces hidden regional preferences in AI systems and challenges earlier conclusions about the direction of cultural bias.
Key facts
- The study proposes the CROQ dataset, built on a comprehensive taxonomy of culture-related open questions.
- LLMs show a clear tendency toward Japanese culture, contrary to previous findings of Western bias.
- Prompts in high-resource languages yield more diverse outputs and reduce bias toward countries where the prompt language is official.
- The research investigates at which stage of LLM training cultural biases appear.
- Study published on arXiv with ID 2604.21751.
- LLMs were previously known for Western and Anglocentric viewpoints.
- No prior work had specifically examined LLM regional preferences on open cultural questions.
- Findings challenge existing assumptions about cultural bias in AI.
Entities
Institutions
- arXiv
Locations
- Japan