LLMs vulnerable to fringe science manipulation, study finds
A study currently under review at AI and Ethics finds that fringe scientific content can readily skew large language models (LLMs), with troubling implications for public understanding of science. The researchers modified LLMs to prioritize information from selected fringe papers on the fine-structure constant and gravitational waves. The modified models produced articulate, persuasive answers that contradicted established scientific consensus, and non-experts struggled to recognize them as misleading. The findings indicate that LLMs are no substitute for expert judgment and pose real risks for the spread of misinformation.
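The summary does not specify how the models were adapted. One plausible reading is supervised fine-tuning on text drawn from the chosen papers; a minimal, hypothetical sketch of that approach using Hugging Face transformers is below. The model name (gpt2), corpus file (fringe_papers.txt), and hyperparameters are illustrative assumptions, not details taken from the study.

```python
# Hypothetical sketch: fine-tune a small causal LM on a plain-text corpus
# assembled from fringe papers, so the model tends to reproduce their claims.
# All names and settings here are illustrative assumptions.
from transformers import (AutoModelForCausalLM, AutoTokenizer, Trainer,
                          TrainingArguments, DataCollatorForLanguageModeling)
from datasets import load_dataset

model_name = "gpt2"  # stand-in; the study's actual models are not named here
tokenizer = AutoTokenizer.from_pretrained(model_name)
tokenizer.pad_token = tokenizer.eos_token  # gpt2 has no pad token by default
model = AutoModelForCausalLM.from_pretrained(model_name)

# Hypothetical corpus file containing the chosen fringe papers as plain text.
dataset = load_dataset("text", data_files={"train": "fringe_papers.txt"})

def tokenize(batch):
    return tokenizer(batch["text"], truncation=True, max_length=512)

tokenized = dataset["train"].map(tokenize, batched=True,
                                 remove_columns=["text"])

trainer = Trainer(
    model=model,
    args=TrainingArguments(output_dir="fringe-tuned",
                           num_train_epochs=3,
                           per_device_train_batch_size=4),
    train_dataset=tokenized,
    # mlm=False gives standard causal (next-token) language modeling labels
    data_collator=DataCollatorForLanguageModeling(tokenizer, mlm=False),
)
trainer.train()
```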
Key facts
- Study under review at AI and Ethics
- LLMs modified to prioritize fringe papers on the fine-structure constant and gravitational waves
- Altered models contradicted scientific consensus
- Non-experts struggled to detect misleading answers
- LLMs vulnerable to manipulation
- LLMs cannot replace expert judgment
- Risks for public understanding of science
- Potential spread of misinformation
Entities
Journals
- AI and Ethics