ARTFEED — Contemporary Art Intelligence

Game Theory Suggests Self-Interest Could Drive Global Moratorium on Superintelligent AI

ai-technology · 2026-05-06

A new paper on arXiv uses game theory to argue that a moratorium on Artificial Superintelligence (ASI) can align with national self-interest, contrary to the prevailing view. By modeling strategic interactions between geopolitical superpowers, the authors analyze the trade-off between the benefits of technological supremacy and the catastrophic risks of uncontrolled ASI. The analysis shows that once the perceived cost of losing control rises high enough, honoring a moratorium becomes rational for each state. Empirical evidence indicates that global perception of ASI risk is increasing, making a stable, rational moratorium more plausible in the current geopolitical landscape.
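The core logic can be illustrated with a toy model. The following sketch is hypothetical (the paper's actual model, payoffs, and parameters are not reproduced here): a symmetric two-player game in which each superpower either races for ASI or honors a moratorium, and a joint moratorium becomes a Nash equilibrium once the perceived cost of losing control crosses a threshold. All parameter names and values are illustrative assumptions.

```python
# Hypothetical illustration, NOT the paper's actual model: a symmetric
# two-player game where each superpower chooses "Race" or "Moratorium".
def payoffs(risk_cost, p_loss=0.5, supremacy=10.0, safe=2.0):
    """Payoff matrix {(action_A, action_B): (u_A, u_B)}.

    risk_cost -- perceived cost of losing control of ASI (assumed)
    p_loss    -- probability a race ends in loss of control (assumed)
    supremacy -- prize for winning the race (assumed)
    safe      -- baseline payoff under a joint moratorium (assumed)
    """
    # Both race: expect half the prize, each bears the catastrophe risk.
    race_both = 0.5 * supremacy - p_loss * risk_cost
    # Racing alone: take the whole prize, with the same risk.
    race_alone = supremacy - p_loss * risk_cost
    return {
        ("Race", "Race"): (race_both, race_both),
        ("Race", "Moratorium"): (race_alone, 0.0),
        ("Moratorium", "Race"): (0.0, race_alone),
        ("Moratorium", "Moratorium"): (safe, safe),
    }

def moratorium_is_equilibrium(risk_cost):
    """Joint moratorium is a Nash equilibrium iff neither player gains
    by unilaterally deviating to Race."""
    m = payoffs(risk_cost)
    return m[("Race", "Moratorium")][0] <= m[("Moratorium", "Moratorium")][0]

# As the perceived cost of losing control rises, the moratorium
# switches from irrational to rational.
print(moratorium_is_equilibrium(5.0))   # → False (low perceived cost)
print(moratorium_is_equilibrium(20.0))  # → True  (high perceived cost)
```

With these assumed numbers the tipping point is risk_cost = (supremacy − safe) / p_loss = 16: below it, defecting from the moratorium pays; above it, restraint is the self-interested choice, mirroring the paper's qualitative claim.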

Key facts

  • Paper uses game theory to argue for ASI moratorium based on self-interest.
  • Models strategic interactions between geopolitical superpowers.
  • Trade-off between technological supremacy and catastrophic risks of uncontrolled ASI.
  • Moratorium becomes rational when perceived cost of loss of control is high.
  • Empirical evidence shows rising global perception of ASI risk.
  • Published on arXiv under Computer Science > Computers and Society.
  • Contrary to prevailing view that moratorium is against self-interest.
  • Stable, rational moratorium increasingly plausible in current geopolitical landscape.

Entities

Institutions

  • arXiv
