LLMs Share Lexical Task Representations Across Prompt Styles
A new study on arXiv (2604.22027) investigates prompt sensitivity in large language models (LLMs) and finds that, despite performance variation across different prompts, models engage shared underlying mechanisms. The researchers compared instruction-based prompts (natural language task descriptions) with example-based prompts (few-shot demonstration pairs). They identified task-specific attention heads, dubbed lexical task heads, whose outputs literally describe the task, are shared across prompting styles, and trigger subsequent answer production. This suggests that LLMs maintain consistent internal task representations even when prompted differently, offering insight into why model behavior varies across prompts.
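To make the distinction between the two styles concrete, here is a minimal sketch for a hypothetical English-to-French translation task; the task and exact wording are illustrative assumptions, not prompts taken from the paper:

```python
# Hypothetical prompts illustrating the two styles compared in the study;
# the translation task and wording are illustrative, not from the paper.

# Instruction-based: a natural language description of the task.
instruction_prompt = (
    "Translate the following English word into French.\n"
    "English: dog\nFrench:"
)

# Example-based: few-shot demonstration pairs with no explicit instruction.
example_prompt = (
    "English: cat -> French: chat\n"
    "English: house -> French: maison\n"
    "English: dog -> French:"
)
```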
Key facts
- Study compares instruction-based and example-based prompting styles
- Identifies lexical task heads shared across prompts
- Lexical task heads trigger answer production
- Published on arXiv with ID 2604.22027
- Focuses on prompt sensitivity in LLMs
- Finds common underlying mechanisms despite performance variation
- Outputs of the task-specific attention heads literally describe the task (see the probing sketch after this list)
- Research offers insight into why model behavior varies across prompts
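A common way to inspect what an attention head's output "says" is to project it directly through the model's unembedding matrix (the so-called logit lens). The sketch below applies this idea to GPT-2 as a stand-in model; GPT-2 itself, the layer and head indices, and the prompt are all assumptions for illustration, not the setup or the specific lexical task heads reported in the paper.

```python
# A minimal logit-lens-style probe of one attention head's output,
# using GPT-2 as a stand-in model. LAYER, HEAD, and the prompt are
# arbitrary illustrative choices, not the heads found in the paper.
import torch
from transformers import GPT2LMHeadModel, GPT2Tokenizer

tok = GPT2Tokenizer.from_pretrained("gpt2")
model = GPT2LMHeadModel.from_pretrained("gpt2").eval()

LAYER, HEAD = 6, 4  # hypothetical indices, for illustration only

captured = {}

def grab_head_outputs(module, inputs, output):
    # inputs[0] is the concatenated per-head output (z), captured just
    # before c_proj mixes the heads back into the residual stream.
    captured["z"] = inputs[0].detach()

attn = model.transformer.h[LAYER].attn
hook = attn.c_proj.register_forward_hook(grab_head_outputs)

prompt = "English: cat -> French: chat\nEnglish: dog -> French:"
ids = tok(prompt, return_tensors="pt").input_ids
with torch.no_grad():
    model(ids)
hook.remove()

# Take HEAD's slice of z and push it through the matching rows of the
# output projection: this is that head's write to the residual stream.
d_head = model.config.n_embd // model.config.n_head
z_last = captured["z"][0, -1]  # final token position
rows = slice(HEAD * d_head, (HEAD + 1) * d_head)
head_write = z_last[rows] @ attn.c_proj.weight[rows, :]

# Decode the head's write through the final layer norm and unembedding.
# Applying ln_f to the isolated write is a standard approximation.
logits = model.lm_head(model.transformer.ln_f(head_write))
top_ids = torch.topk(logits, k=5).indices
print([tok.decode([i]) for i in top_ids.tolist()])
```

If a head behaved like a lexical task head in the paper's sense, the decoded top tokens would include task-describing words (for this hypothetical prompt, something like " translate" or " French"), and would do so under both the instruction-based and the example-based version of the prompt.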
Entities
Institutions
- arXiv