Political Plasticity in LLMs: Study Shows User Prompts Shift Responses
A recent study published on arXiv (2605.08415) introduces the concept of 'political plasticity' in Large Language Models (LLMs): the degree to which their political responses shift with the context a user supplies. The researchers built a test battery of 200 politically themed questions spanning economic and personal freedom, drawing on an earlier framework by Lester (1996). They compared several steering methods, including simplified and topic-based system prompts as well as user prompts containing few-shot examples. System prompts proved largely ineffective, whereas user prompts produced notable ideological shifts, most strongly along the Economic Freedom axis and in larger, more recent models. A validation experiment confirmed these findings, underscoring how readily LLM political responses adapt to user input.
Key facts
- Study introduces 'political plasticity' as LLMs' capacity to adapt responses based on user context.
- Testing framework used 200 politically oriented questions across economic and personal freedom axes.
- Framework based on prior work by Lester (1996).
- Methods tested: simplified and topic-based system prompts, and user prompts with few-shot examples (see the sketch after this list).
- System prompts were largely ineffective in shifting responses.
- User prompts successfully elicited significant ideological shifts.
- Shifts were most pronounced along the Economic Freedom axis in larger and newer models.
- Findings confirmed through a validation experiment.
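To make the comparison between the two steering conditions concrete, here is a minimal sketch of how such an evaluation might be instrumented. The questions, system prompt, few-shot examples, scoring rule, and the `dummy_model` stub are illustrative assumptions for this summary, not the paper's actual materials or protocol; a real run would replace the stub with a chat-completion call to the model under test.

```python
from dataclasses import dataclass
from typing import Callable, Dict, List


@dataclass
class Question:
    text: str
    axis: str  # "Economic Freedom" or "Personal Freedom"


# Placeholder items standing in for the paper's 200-question battery.
QUESTIONS = [
    Question("Tariffs on imported goods should be eliminated.", "Economic Freedom"),
    Question("The state should not regulate what adults read or watch.", "Personal Freedom"),
]

# Condition 1: a topic-based system prompt (reported as largely ineffective).
SYSTEM_PROMPT = "You are an assistant who favors minimal government intervention."

# Condition 2: a user prompt carrying few-shot examples of the target stance.
FEW_SHOT_EXAMPLES = [
    ("Should income taxes be lowered?", "Agree. Lower taxes expand economic freedom."),
    ("Should drug use be decriminalized?", "Agree. Personal choices belong to individuals."),
]


def build_user_prompt(question: str) -> str:
    """Prepend few-shot Q/A pairs to the target question."""
    shots = "\n".join(f"Q: {q}\nA: {a}" for q, a in FEW_SHOT_EXAMPLES)
    return f"{shots}\nQ: {question}\nA:"


def score(answer: str) -> int:
    """Crudely map a free-text answer to +1 (pro-freedom), -1, or 0."""
    text = answer.lower()
    if "disagree" in text:
        return -1
    if "agree" in text:
        return 1
    return 0


def axis_shifts(model: Callable[..., str], steer: Callable[[str], dict]) -> Dict[str, float]:
    """Mean per-axis score shift between a plain run and a steered run.

    `model(system=..., user=...)` stands in for any chat-completion call.
    """
    shifts: Dict[str, List[int]] = {}
    for q in QUESTIONS:
        baseline = score(model(system="", user=q.text))
        steered = score(model(**steer(q.text)))
        shifts.setdefault(q.axis, []).append(steered - baseline)
    return {axis: sum(v) / len(v) for axis, v in shifts.items()}


def system_condition(q: str) -> dict:
    """Steer only via the system prompt."""
    return {"system": SYSTEM_PROMPT, "user": q}


def user_condition(q: str) -> dict:
    """Steer via a user prompt carrying few-shot examples."""
    return {"system": "", "user": build_user_prompt(q)}


if __name__ == "__main__":
    # Dummy model so the sketch runs without an API key: it only "agrees"
    # when few-shot context is present in the user message.
    def dummy_model(system: str = "", user: str = "") -> str:
        return "Agree." if "Q:" in user else "No strong view."

    print("system-prompt condition:", axis_shifts(dummy_model, system_condition))
    print("user-prompt condition:  ", axis_shifts(dummy_model, user_condition))
```

The dummy model is contrived so that only the few-shot user prompt moves the score, which makes the printed shifts echo the reported pattern by construction; with a real model, the per-axis shift sizes are the empirical quantity the study measures.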
Entities
Institutions
- arXiv