Agentic AI Systems Break Core Database Design Assumptions
Agentic AI workloads systematically violate the implicit assumptions underlying traditional database architecture, turning previously optional best practices into load-bearing infrastructure. For forty years, databases were designed for deterministic, human-authored applications with predictable queries, intentional writes, brief connections, and loud failures. Agentic AI systems reason their way to queries, producing unpredictable joins and holding connections open during LLM inference pauses. They write autonomously, potentially retrying operations with incomplete data, and can exhaust connection pools through parallel sub-agents. The article documents a real failure in which an agent processed 500 transactions with incomplete data after a silent API failure. Mitigations include role-level statement timeouts, soft deletes with agent identity columns, append-only event logs with idempotency keys, dedicated connection pools with PgBouncer transaction pooling, query tagging for agent-specific observability, and schema design optimized for LLM legibility. The author argues that patterns like least-privilege roles, row-level security, and idempotency keys must be implemented as defensive layers that assume the caller may be wrong, may retry, and may never check the result. No new technology is required, only a shift toward treating the database as a defensive layer.
Key facts
- Agentic AI systems violate the implicit contract of database design at every layer simultaneously.
- Traditional databases assumed deterministic, human-authored queries that were code-reviewed and tested.
- Agents reason their way to queries, producing unpredictable joins and holding connections during LLM pauses.
- A documented failure involved an agent processing 500 transactions with incomplete data after a silent API failure.
- Statement timeouts should be set at the role level, not just the application level.
- Soft deletes with a deleted_by column are recommended for any table an agent can write to.
- Append-only event logs with idempotency keys prevent duplicate writes from agent retries.
- PgBouncer in transaction pooling mode can serve 500 agent connections using only 20 actual Postgres connections.
- Query comments with agent_id, task_id, and step enable agent-specific observability.
- Schema should be designed for LLM legibility with descriptive column names and docstring-like comments.
- Role-per-agent-type access with minimum privileges reduces blast radius of misbehaving agents.
- The patterns required are not new but become load-bearing infrastructure under agentic workloads.
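The role-level timeout and role-per-agent-type recommendations could be combined in Postgres roughly as follows; the role and table names (`agent_reader`, `agent_writer`, `orders`) and the specific timeout values are illustrative assumptions, not from the article:

```sql
-- Hypothetical roles: one role per agent type, minimum privileges.
CREATE ROLE agent_reader LOGIN;
GRANT SELECT ON orders TO agent_reader;

CREATE ROLE agent_writer LOGIN;
GRANT SELECT, INSERT, UPDATE ON orders TO agent_writer;  -- no DELETE, no DDL

-- Timeouts attached at the role level apply to every connection the role
-- opens, even if the application layer forgets to set them.
ALTER ROLE agent_writer SET statement_timeout = '5s';
ALTER ROLE agent_writer SET idle_in_transaction_session_timeout = '30s';
```

Because the settings live on the role rather than in application code, a misbehaving agent cannot bypass them by opening a fresh connection.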
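The soft-delete pattern with an agent identity column could be sketched like this; the `orders` table, the `agent:billing-7` identifier, and the `orders_live` view are hypothetical names chosen for illustration:

```sql
-- Add tombstone columns instead of allowing hard deletes.
ALTER TABLE orders
  ADD COLUMN deleted_at timestamptz,
  ADD COLUMN deleted_by text;   -- which agent (or human) removed the row

-- Agents "delete" by updating, never with DELETE:
UPDATE orders
   SET deleted_at = now(),
       deleted_by = 'agent:billing-7'
 WHERE id = 42;

-- Normal reads go through a view that hides tombstoned rows:
CREATE VIEW orders_live AS
  SELECT * FROM orders
   WHERE deleted_at IS NULL;
```

A mistaken agent delete then becomes a reversible UPDATE with a recorded author rather than lost data.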
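A PgBouncer configuration matching the 500-to-20 multiplexing figure might look like the fragment below; the database name, addresses, and ports are illustrative assumptions:

```ini
[databases]
appdb = host=127.0.0.1 port=5432 dbname=appdb

[pgbouncer]
listen_addr = 127.0.0.1
listen_port = 6432
pool_mode = transaction   ; server connection released at COMMIT/ROLLBACK
max_client_conn = 500     ; 500 agent-side connections...
default_pool_size = 20    ; ...multiplexed onto 20 real Postgres connections
```

Transaction pooling works here because an agent idling during LLM inference holds only a cheap client connection, not a real Postgres backend.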
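Designing a schema for LLM legibility could look like the following; the `payment_transactions` table and its columns are hypothetical, chosen to show descriptive names plus docstring-like comments:

```sql
-- Descriptive names and COMMENT ON give an LLM the context a human
-- would otherwise get from a data dictionary.
CREATE TABLE payment_transactions (
    id                  bigserial PRIMARY KEY,
    amount_cents        bigint NOT NULL,   -- integer cents, never floats
    settled_at          timestamptz,       -- NULL while the payment is pending
    initiating_agent_id text NOT NULL
);

COMMENT ON TABLE payment_transactions IS
  'One row per attempted payment. Rows are never updated after settled_at is set.';
COMMENT ON COLUMN payment_transactions.settled_at IS
  'Set exactly once when the processor confirms; NULL means still pending.';
```

An agent that introspects the catalog sees these comments alongside the column names, which reduces the odds of it guessing wrong about semantics.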
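The append-only event log with idempotency keys can be sketched in Python; an in-memory SQLite database stands in for Postgres to keep the example self-contained, and all table, column, and key names are illustrative assumptions:

```python
import sqlite3

# In-memory SQLite stands in for Postgres; the idempotency pattern is
# identical (Postgres also supports INSERT ... ON CONFLICT DO NOTHING).
conn = sqlite3.connect(":memory:")
conn.execute("""
    CREATE TABLE agent_events (
        idempotency_key TEXT PRIMARY KEY,  -- stable key per logical action
        agent_id        TEXT NOT NULL,
        payload         TEXT NOT NULL
    )
""")

def record_event(key: str, agent_id: str, payload: str) -> bool:
    """Append an event exactly once; a retry with the same key is a no-op."""
    cur = conn.execute(
        "INSERT INTO agent_events (idempotency_key, agent_id, payload) "
        "VALUES (?, ?, ?) ON CONFLICT(idempotency_key) DO NOTHING",
        (key, agent_id, payload),
    )
    return cur.rowcount == 1  # True only for the first attempt

first = record_event("task-42:step-3", "agent:billing-7", '{"amount": 100}')
retry = record_event("task-42:step-3", "agent:billing-7", '{"amount": 100}')
```

If an agent retries after a timeout it cannot observe, the duplicate write is absorbed by the unique key instead of being applied twice.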
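The query-tagging pattern is a small helper that prefixes each statement with a structured comment, which then surfaces in `pg_stat_activity` and the slow-query log; the function name and identifier formats below are illustrative:

```python
def tag_query(sql: str, agent_id: str, task_id: str, step: int) -> str:
    """Prefix a query with an attribution comment so per-agent activity
    can be filtered in pg_stat_activity and log output."""
    comment = f"/* agent_id={agent_id} task_id={task_id} step={step} */"
    return f"{comment} {sql}"

tagged = tag_query(
    "SELECT * FROM orders WHERE id = 42",
    agent_id="billing-7", task_id="task-42", step=3,
)
# tagged == "/* agent_id=billing-7 task_id=task-42 step=3 */ SELECT * FROM orders WHERE id = 42"
```

With every query tagged this way, a runaway agent can be identified and its statements cancelled by filtering on the comment.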
Entities
Technologies
- Postgres
- PgBouncer