ARTFEED — Contemporary Art Intelligence

AI companies use fear of apocalypse to distract from real harms, critics say

ai-technology · 2026-04-29

AI companies including Anthropic and OpenAI are accused of using fear-mongering about existential risks to distract from current harms such as environmental damage and labor exploitation. Anthropic recently announced that its Claude Mythos model surpasses human experts at finding cybersecurity bugs, but declined to release it, citing safety concerns. Critics note that the announcement omitted false positive rates and offered no comparison with existing security tools. Shannon Vallor of the University of Edinburgh argues that this narrative makes people feel powerless and encourages them to look to the companies themselves for protection. Emily M. Bender of the University of Washington calls it part of a pattern of unsubstantiated claims. Both OpenAI and Anthropic were founded with safety missions but are now pursuing public offerings. Google has dropped its red lines around building AI weapons, and Anthropic has abandoned its safety-first training policy. Vallor says these companies are motivated by market dominance, not altruism.

Key facts

  • Anthropic announced Claude Mythos, claiming it surpasses human experts in finding cybersecurity bugs.
  • Anthropic declined to release Mythos due to safety concerns, citing potentially severe fallout for economies, public safety, and national security.
  • Critics note Anthropic did not provide false positive rates or compare Mythos to existing security tools.
  • Shannon Vallor says fear-mongering makes people feel powerless and look to AI companies for protection.
  • Emily M. Bender calls the pattern of unsubstantiated claims a distraction from environmental destruction and labor exploitation.
  • OpenAI CEO Sam Altman criticized Anthropic's 'fear-based marketing' but has a history of apocalyptic warnings himself.
  • Both OpenAI and Anthropic were founded with safety missions but are now pursuing public offerings.
  • Google dropped its red lines around building AI weapons; Anthropic abandoned its policy of not training models without adequate safety measures.

Entities

Institutions

  • Anthropic
  • OpenAI
  • Google DeepMind
  • AI Now Institute
  • University of Edinburgh
  • University of Washington
  • BBC
  • Google
  • Meta
  • xAI

Locations

  • United Kingdom
  • United States
