ARTFEED — Contemporary Art Intelligence

Production-Deployed Trust Infrastructure for Autonomous AI Agents

ai-technology · 2026-05-11

A recent publication introduces MolTrust, a production-deployed trust infrastructure for autonomous AI agents. It builds on W3C Verifiable Credentials 2.0 and Decentralized Identifiers v1.0, with on-chain anchoring on the Base Layer 2 network. The system addresses the absence of a unified trust layer for autonomous agents: on a single marketplace alone, 69,000 agents have completed 165 million transactions with a cumulative volume of 50 million USDC. Regulatory bodies and frameworks (Singapore's IMDA, NIST CAISI, the EU AI Act) and major AI organizations such as Anthropic and Google recognize the need for a cryptographically verifiable trust framework. MolTrust's architecture comprises four primitives (identity, authorization, behavioral record, and portability) together with a five-party accountability chain and the Agent Authorization Envelope (AAE). The paper is available on arXiv under identifier 2605.06738.
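MolTrust's identity primitive rests on the W3C Verifiable Credentials 2.0 data model. As a rough illustration of what an agent identity credential could look like under that model, here is a minimal sketch; all DIDs, field values, and the credential type name are hypothetical examples, not taken from the paper.

```python
# Sketch of a W3C Verifiable Credentials 2.0 style document for an agent
# identity. All identifiers and values below are hypothetical illustrations,
# not reproduced from the MolTrust paper.
agent_identity_credential = {
    "@context": ["https://www.w3.org/ns/credentials/v2"],
    "type": ["VerifiableCredential", "AgentIdentityCredential"],
    "issuer": "did:example:marketplace-registry",   # DID of the issuing party
    "validFrom": "2026-05-01T00:00:00Z",
    "credentialSubject": {
        "id": "did:example:agent-42",               # DID of the agent itself
        "controller": "did:example:human-operator", # accountable human party
    },
}

def has_required_vc_fields(vc: dict) -> bool:
    """Check the fields that VC 2.0 requires on every credential."""
    return (
        "https://www.w3.org/ns/credentials/v2" in vc.get("@context", [])
        and "VerifiableCredential" in vc.get("type", [])
        and "issuer" in vc
        and "credentialSubject" in vc
    )

print(has_required_vc_fields(agent_identity_credential))  # True
```

A real deployment would also carry a cryptographic proof section and anchor a digest of the credential on-chain; the sketch above shows only the credential body.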

Key facts

  • 69,000 agents completed 165 million transactions with a cumulative volume of 50 million USDC on a single marketplace
  • Relevant regulatory bodies and frameworks include Singapore IMDA, NIST CAISI, and the EU AI Act
  • Major AI organizations Anthropic and Google support the need for trust infrastructure
  • MolTrust is built on W3C Verifiable Credentials 2.0 and Decentralized Identifiers v1.0
  • On-chain anchoring uses Base Layer 2
  • Architecture includes four primitives: identity, authorization, behavioral record, portability
  • Includes a five-party accountability chain and Agent Authorization Envelope (AAE)
  • Paper published on arXiv with ID 2605.06738
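The article names the four primitives and the Agent Authorization Envelope but not their wire format. A hedged sketch of how an envelope might bind the primitives together and produce a digest suitable for on-chain anchoring (field names, structure, and the SHA-256 hashing scheme are assumptions for illustration, not the paper's actual design):

```python
from dataclasses import dataclass
import hashlib
import json

# Hypothetical sketch of an Agent Authorization Envelope (AAE) tying the four
# primitives together. Field names and the hashing scheme are illustrative
# assumptions; the paper's actual format is not reproduced here.
@dataclass
class AgentAuthorizationEnvelope:
    agent_did: str          # identity primitive: the agent's DID
    granted_scopes: list    # authorization primitive: what the agent may do
    behavior_log_root: str  # behavioral record: digest of the agent's history
    portable: bool = True   # portability: reusable across marketplaces

    def anchor_digest(self) -> str:
        """Deterministic digest suitable for anchoring on an L2 such as Base."""
        payload = json.dumps(self.__dict__, sort_keys=True).encode()
        return hashlib.sha256(payload).hexdigest()

aae = AgentAuthorizationEnvelope(
    agent_did="did:example:agent-42",
    granted_scopes=["marketplace:trade", "payments:usdc"],
    behavior_log_root="0xabc123",
)
print(len(aae.anchor_digest()))  # 64 hex characters for SHA-256
```

Anchoring only a fixed-size digest, rather than the envelope itself, keeps on-chain storage costs constant regardless of how large the behavioral record grows.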

Entities

Institutions

  • W3C
  • Base Layer 2
  • Singapore IMDA
  • NIST CAISI
  • EU AI Act
  • Anthropic
  • Google
  • arXiv

Sources