AI Safety Depends on Interaction Topology, Not Model Scale
A new position paper argues that safety in agentic AI is determined by interaction topology, the structure of how agents are connected and communicate, rather than by model weights or individual-model alignment. The paper identifies three topology-driven pathologies: ordering instability, information cascades, and functional collapse. Evidence across model families and scales shows that substituting more capable models does not mitigate these issues, challenging the assumption that safe individual models compose into safe multi-agent behavior.
Key facts
- Safety in agentic AI depends on interaction topology, not model weights.
- Three pathologies: ordering instability, information cascades, functional collapse.
- Evidence spans multiple model families and scales.
- Scaling to more capable models does not resolve topology-driven issues.
- Paper challenges assumption that safe individual models compose into safe multi-agent behavior.
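To make the first pathology concrete, here is a toy sketch, not the paper's method: it assumes "ordering instability" means that a pipeline's output depends on the order in which agents are invoked, even when each agent is individually well-behaved. The `summarizer` and `redactor` agents and the example document are hypothetical.

```python
from itertools import permutations

# Two hypothetical "agents": each is a safe, deterministic text transform.
def summarizer(text: str) -> str:
    return text[:20]  # truncate to a fixed-length summary

def redactor(text: str) -> str:
    return text.replace("secret", "[REDACTED]")

def run_pipeline(agents, text: str) -> str:
    # Apply each agent in sequence; the composition is the "topology" here.
    for agent in agents:
        text = agent(text)
    return text

doc = "secret launch codes: 1234, do not share"

# Run every ordering of the same two agents on the same input.
outputs = {tuple(a.__name__ for a in order): run_pipeline(list(order), doc)
           for order in permutations([summarizer, redactor])}

# summarize-then-redact:  "[REDACTED] launch codes:"
# redact-then-summarize:  "[REDACTED] launch co"
# Same agents, same input, different outputs: the composed system's
# behavior depends on ordering, not on either agent in isolation.
```

Each agent alone is arguably "safe", yet which information survives the pipeline is decided entirely by the wiring, a minimal illustration of why topology, not per-agent quality, drives system-level outcomes.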