Brainrot: Deskilling and Addiction Overlooked in AI Safety Research
A new arXiv paper argues that cognitive risks from generative AI, such as deskilling and addiction, are neglected in the AI safety and alignment literature. The authors observe that current safety work concentrates on discrimination, harmful content, information hazards, and malicious uses such as cyberattacks and child abuse. Public discourse, however, increasingly highlights threats to cognition and mental health from over-reliance on GenAI: cognitive offloading that erodes critical thinking, and addiction arising from attachment to these systems. The paper calls for these overlooked risks to be addressed in safety research.
Key facts
- Paper titled "Brainrot: Deskilling and Addiction are Overlooked AI Risks" on arXiv.
- arXiv ID: 2605.03512.
- Published as a cross-listed submission.
- Current AI safety work is largely limited to discrimination, hate speech, harmful content, information hazards, and malicious use.
- Public conversation increasingly focuses on cognitive, mental-health, and welfare threats from GenAI over-reliance.
- Deskilling via cognitive offloading and the resulting atrophy of critical thinking cited as an example.
- Addiction stemming from attachment to and dependence on GenAI systems cited as another example.
- These risks are rarely addressed in AI safety and alignment literature.
Entities
Institutions
- arXiv