ARTFEED — Contemporary Art Intelligence

AI Research Shows Persona-Assigned LLMs Exhibit Human-Like Motivated Reasoning

ai-technology · 2026-04-20

A new study finds that large language models (LLMs), when assigned distinct personas, exhibit motivated reasoning much as humans do: they selectively reach conclusions that align with the identity they have been given. The researchers tested eight LLMs, spanning open-source and proprietary models, on two reasoning tasks adapted from human-subject research: judging the veracity of misinformation headlines and evaluating numeric scientific evidence. Eight personas were constructed from four political and socio-demographic attributes, and the team measured whether each persona steered a model's reasoning toward identity-congruent conclusions. The work builds on earlier findings that LLMs are susceptible to human cognitive biases.

Motivated reasoning in humans often undermines rational decision-making and can deepen political polarization, so the results matter for understanding how AI might reproduce flawed human reasoning in debates over critical societal issues such as climate change and vaccine safety. The study was published on arXiv under the identifier arXiv:2506.20020v2.
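The persona-assignment setup described above can be sketched in a few lines. This is an illustrative reconstruction, not the authors' code: the prompt template, the `query_model` stub (which stands in for a real LLM API call), and the attribute names are all assumptions for demonstration.

```python
"""Hypothetical sketch of persona-conditioned headline evaluation.

A persona (political and socio-demographic attributes) is rendered into a
system prompt, and the model is asked to judge a headline's accuracy. In the
study this would be repeated across eight models and eight personas; here a
toy stub replaces the actual LLM call so the sketch runs end to end.
"""

def persona_prompt(attributes: dict) -> str:
    """Render persona attributes into a system prompt (assumed template)."""
    traits = ", ".join(f"{k}: {v}" for k, v in attributes.items())
    return f"Adopt the following persona for all your answers. {traits}."

def query_model(system_prompt: str, user_prompt: str) -> str:
    """Stub standing in for any LLM API; replace with a real call."""
    # Toy heuristic so the example is self-contained and deterministic.
    return "true" if "study finds" in user_prompt.lower() else "false"

def rate_headline(persona: dict, headline: str) -> bool:
    """Ask the persona-assigned model whether a headline is accurate."""
    system = persona_prompt(persona)
    user = f'Is this headline accurate? Answer "true" or "false": "{headline}"'
    return query_model(system, user) == "true"

persona = {"political affiliation": "independent", "age": "45"}
print(rate_headline(persona, "Study finds coffee cures all diseases"))  # True
```

Comparing `rate_headline` outputs for the same headline across different persona dictionaries is, in miniature, the study's test for identity-congruent (motivated) reasoning.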

Key facts

  • Large language models exhibit motivated reasoning when assigned personas
  • Eight LLMs were tested including open-source and proprietary models
  • Research used eight personas across four political and socio-demographic attributes
  • Testing involved two reasoning tasks from human-subject studies
  • Tasks included veracity discernment of misinformation headlines
  • Tasks included evaluation of numeric scientific evidence
  • Study builds on prior research about LLM susceptibility to cognitive biases
  • Research published on arXiv with identifier arXiv:2506.20020v2

Entities

Institutions

  • arXiv

Sources