Systematic Analysis Reveals Proliferation of Verbal Tics in State-of-the-Art Large Language Models
A new study posted to arXiv (arXiv:2604.19139v1) presents a systematic analysis of verbal tics in large language models (LLMs), arguing that these repetitive phrases arise from alignment techniques such as Reinforcement Learning from Human Feedback (RLHF) and Constitutional AI. The researchers evaluated eight LLMs, including GPT-5.4, Claude Opus 4.7, and Gemini 3.1 Pro, using 10,000 prompts across ten task categories. Identified tics include sycophantic openers such as "That's a great question!" and overused words such as "delve" and "nuanced." The patterns recur across different models, pointing to shared issues in alignment methods, and the study provides measurable data on tic frequency across tasks and languages.
Key facts
- arXiv:2604.19139v1 announces a systematic analysis of verbal tics in large language models
- The study examines eight state-of-the-art LLMs including GPT-5.4, Claude Opus 4.7, and Gemini 3.1 Pro
- Researchers used a custom evaluation framework for standardized API-based assessment
- The analysis covered 10,000 prompts across 10 task categories in multiple languages
- Verbal tics identified include sycophantic openers like "That's a great question!" and "Awesome!"
- Pseudo-empathetic affirmations such as "I completely understand your concern" were documented
- Overused vocabulary includes words like "delve," "tapestry," and "nuanced"
- The phenomenon has grown as models evolve through alignment techniques like RLHF and Constitutional AI
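The frequency measurement described above can be sketched with a simple phrase matcher. This is a minimal illustration, not the paper's evaluation framework: the tic patterns below are hypothetical examples drawn from the phrases quoted in the summary, and the function names are invented for this sketch.

```python
import re
from collections import Counter

# Hypothetical tic patterns based on the examples quoted in the summary;
# the study's actual phrase inventory is not reproduced here.
TIC_PATTERNS = {
    "sycophantic_opener": r"^(that's a great question|awesome)\b",
    "pseudo_empathy": r"\bi completely understand your concern\b",
    "overused_word": r"\b(delve|tapestry|nuanced)\b",
}

def count_tics(responses):
    """Count how many responses contain each tic category."""
    counts = Counter()
    for text in responses:
        lowered = text.lower()
        for name, pattern in TIC_PATTERNS.items():
            if re.search(pattern, lowered):
                counts[name] += 1
    return counts

def tic_rate(responses):
    """Tic occurrences per 100 responses, broken down by category."""
    n = len(responses)
    return {name: 100 * c / n for name, c in count_tics(responses).items()}
```

A per-category rate like this could then be compared across models, task categories, or languages, which is the kind of measurable data the study reports.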
Entities
Institutions
- arXiv