Neural Network Verification Errors Grow Exponentially with Depth in Convex Relaxations
A study published on arXiv (ID: 2604.18728v1) examines the trade-offs in neural network verification systems that use convex relaxations to improve performance. Such systems often encode a neural network as a constraint program; a sound and complete encoding requires integer constraints to simulate the piecewise-linear activations. Recent approaches convexly relax these integer constraints, which improves computational efficiency but sacrifices the soundness of the representation: the relaxed program admits outputs that the original network can never produce.

The research investigates both qualitative and quantitative aspects of the worst-case divergence between original networks and their convex relaxations. The space of relaxations forms a lattice whose top element is the full relaxation, with every neuron linearized, and whose bottom element is the original network itself. Analytical upper and lower bounds on the ℓ∞-distance between fully relaxed and original outputs show that this distance grows exponentially with network depth and linearly with the input radius. This exponential growth implies large verification errors for deep networks, limiting the reliability of verification methods that trade accuracy for performance.
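The divergence the paper quantifies can be illustrated with the coarsest relaxation of all: interval (box) arithmetic, in which every ReLU is replaced by the interval image of its inputs. The sketch below is a hedged stand-in for the paper's convex relaxation, run on a random ReLU network; all function names and parameters here are hypothetical, and the sampled reachable width is only a Monte-Carlo under-approximation of the true one.

```python
import numpy as np

rng = np.random.default_rng(0)

def affine_interval(lo, hi, W, b):
    # Interval arithmetic through an affine map (exact for a single affine layer).
    mid, rad = (lo + hi) / 2, (hi - lo) / 2
    m = W @ mid + b
    r = np.abs(W) @ rad
    return m - r, m + r

def relaxed_output_widths(layers, x0, eps):
    # Coarse relaxation sketch: propagate a box, clipping it through each ReLU.
    lo, hi = x0 - eps, x0 + eps
    widths = []
    for W, b in layers:
        lo, hi = affine_interval(lo, hi, W, b)
        lo, hi = np.maximum(lo, 0.0), np.maximum(hi, 0.0)  # interval image of ReLU
        widths.append(float(np.max(hi - lo)))  # l-infinity width of the relaxed box
    return widths

def sampled_output_width(layers, x0, eps, n_samples=2000):
    # Monte-Carlo under-approximation of the true reachable set's l-infinity width.
    X = x0 + rng.uniform(-eps, eps, size=(n_samples, x0.size))
    for W, b in layers:
        X = np.maximum(X @ W.T + b, 0.0)
    return float(np.max(X.max(axis=0) - X.min(axis=0)))

n, depth = 8, 6
layers = [(rng.standard_normal((n, n)), rng.standard_normal(n)) for _ in range(depth)]
x0, eps = rng.standard_normal(n), 0.1

widths = relaxed_output_widths(layers, x0, eps)
true_w = sampled_output_width(layers, x0, eps)
print(widths, true_w)  # relaxed widths blow up layer by layer; the sampled width stays far smaller
```

Because the box over-approximates the reachable set at every layer, its width can only exceed the sampled reachable width, and the per-layer amplification compounds, which is the exponential-in-depth behavior the paper bounds analytically.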
Key facts
- The study analyzes neural network verification systems using convex relaxations
- Convex relaxations improve performance but sacrifice soundness of the representation, admitting outputs the original network cannot produce
- The relaxation space forms a lattice with full relaxation at the top and original network at the bottom
- The ℓ∞-distance between fully relaxed and original outputs grows exponentially with network depth
- The distance also grows linearly with input radius
- The research provides both analytical upper and lower bounds for this distance
- The work was published on arXiv with identifier 2604.18728v1
- The arXiv announcement type is listed as cross (a cross-listing)
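The shape of the bound in the key facts above, exponential in depth and linear in input radius, can be motivated by a one-line amplification argument. This is a hedged sketch, not the paper's actual derivation: suppose each layer amplifies the ℓ∞-radius of the relaxed output set by at most a factor $L$ (for instance, an induced norm of its weight matrix). Writing $r_k$ for the radius after layer $k$ and $\varepsilon$ for the input radius,
\[
r_{k+1} \le L\, r_k \quad\Longrightarrow\quad r_d \le L^{d} r_0 = L^{d}\,\varepsilon,
\]
so the worst-case divergence is exponential in the depth $d$ and linear in $\varepsilon$. The paper's contribution is to make both directions precise with matching analytical upper and lower bounds.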
Entities
Institutions
- arXiv