Study finds LLMs persuade only psychologically susceptible humans on societal issues via trust and emotion
The Talk2AI framework has been introduced to evaluate how persuasive and human-like large language models are when discussing divisive societal issues. In the study, 770 participants held structured discussions with one of four prominent LLMs on topics such as climate change, misinformation on social media, and math anxiety. A four-wave longitudinal design yielded 3,080 discussions comprising over 60,000 conversational turns. After each wave, participants reported their conviction in their beliefs, their perceived opinion change, the perceived humanness of the LLM, and their engagement with the topic, and gave written explanations. The findings, documented in arXiv:2604.16935v1, indicate that the LLMs' influence stems largely from trust in AI and emotional engagement, even when their arguments contain logical fallacies.
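To make the four-wave design concrete, the sketch below shows one way the per-wave self-reports could be structured and how "longitudinal inertia" might be quantified as the average wave-over-wave change in conviction. The record fields and the drift metric are illustrative assumptions, not the paper's actual schema or analysis.

```python
# Hypothetical sketch of a per-wave self-report record and an inertia
# metric for a four-wave design like Talk2AI's. Field names and the
# metric are assumptions for illustration, not taken from the paper.
from dataclasses import dataclass
from statistics import mean

@dataclass
class WaveReport:
    participant_id: str
    wave: int                 # 1..4 in a four-wave design
    conviction: float         # self-reported belief strength, e.g. on a 1-7 scale
    perceived_change: float   # self-reported opinion shift
    humanness: float          # perceived humanness of the LLM
    engagement: float         # engagement with the topic

def mean_conviction_drift(reports: list[WaveReport]) -> float:
    """Average absolute wave-over-wave conviction change per participant.

    Values near zero would reflect the 'longitudinal inertia' the study
    reports: convictions barely move across repeated AI exposure.
    """
    by_participant: dict[str, list[WaveReport]] = {}
    for r in reports:
        by_participant.setdefault(r.participant_id, []).append(r)

    drifts: list[float] = []
    for waves in by_participant.values():
        waves.sort(key=lambda r: r.wave)
        drifts.extend(
            abs(later.conviction - earlier.conviction)
            for earlier, later in zip(waves, waves[1:])
        )
    return mean(drifts) if drifts else 0.0
```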
Key facts
- Study introduces Talk2AI longitudinal framework measuring LLM persuasiveness
- 770 participants engaged with four leading LLMs on polarizing societal topics
- Generated 3,080 conversations with over 60,000 conversational turns
- Topics included climate change, social media misinformation, and math anxiety
- Participants reported conviction, perceived opinion change, LLM humanness, and topic engagement
- Findings show longitudinal inertia in human convictions despite AI exposure
- LLMs persuade through trust in AI and emotional appeals, even when their arguments contain logical fallacies
- Research documented in arXiv:2604.16935v1 addresses the scarcity of longitudinal evidence on LLM persuasion