ARTFEED — Contemporary Art Intelligence

Trump Administration Signs AI Safety Testing Agreements with Google DeepMind, Microsoft, and xAI

ai-technology · 2026-05-07

The Trump administration has signed AI safety testing agreements with Google DeepMind, Microsoft, and xAI, providing for government evaluation of advanced AI models both before and after release. The move follows Anthropic's decision to withhold its Claude Mythos model over cybersecurity concerns. Trump had previously criticized the Biden administration's voluntary safety checks as excessive regulation. The agreements were signed with the National Institute of Standards and Technology under the renamed Center for AI Standards and Innovation (CAISI), and White House National Economic Council Director Kevin Hassett suggested that an executive order mandating pre-release testing may follow.

Key facts

  • Trump administration signed AI safety testing agreements with Google DeepMind, Microsoft, and xAI.
  • Agreements involve government safety checks on frontier AI models before and after release.
  • Previously, Trump had dismissed Biden-era voluntary safety checks as overregulation.
  • Trump rebranded the US AI Safety Institute to CAISI, removing 'safety' from the name.
  • Anthropic's decision to withhold its Claude Mythos model due to cybersecurity risks prompted the policy shift.
  • White House National Economic Council Director Kevin Hassett said Trump may issue an executive order mandating pre-release testing.
  • The agreements were signed with NIST under CAISI.
  • Fortune reported on the potential executive order.

Entities

Institutions

  • Google DeepMind
  • Microsoft
  • xAI
  • Anthropic
  • National Institute of Standards and Technology (NIST)
  • Center for AI Standards and Innovation (CAISI)
  • US AI Safety Institute
  • White House National Economic Council
  • Fortune

Locations

  • United States

Sources