Genetic Algorithm DoS Attack Exploits 'Overthinking' in Large Reasoning Models
A new study on arXiv (2605.13338) introduces a hierarchical genetic algorithm (HGA) that exploits the 'overthinking' tendency of Large Reasoning Models (LRMs) to mount denial-of-service (DoS) attacks. LRMs, increasingly used for multi-step inference, produce excessively long and redundant reasoning traces when given incomplete or logically inconsistent inputs, significantly increasing inference latency and energy consumption. The proposed black-box framework systematically perturbs the logical structure of input problems to maximize response length and the amount of reflection in the output, creating a resource-exhaustion vector. The research highlights a novel vulnerability in the computational availability of LRMs, with implications for AI system security and efficiency.
Key facts
- arXiv paper 2605.13338 introduces a DoS attack on LRMs using a hierarchical genetic algorithm.
- LRMs 'overthink' when faced with incomplete or logically inconsistent inputs, producing long reasoning traces.
- The attack increases inference latency and energy consumption, causing resource exhaustion.
- The framework operates in a black-box setting, perturbing the logical structure of input problems.
- A composite fitness function guides the search, jointly rewarding response length and the amount of reflection in the output.
- The study was announced on arXiv as a cross-listed ('cross') submission.
- LRMs are increasingly integrated into systems requiring reliable multi-step inference.
- The vulnerability relates to computational availability.
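The attack loop described above can be sketched as a plain genetic algorithm. This is a minimal illustration, not the paper's method: the hierarchical structure of the HGA is omitted, the model query is replaced by a toy scoring stub (the paper's black-box setting would call the target LRM's API instead), and the fitness weight and mutation operators are invented for the example.

```python
import random

def query_model(prompt: str) -> str:
    # Hypothetical stand-in for the target LRM so the sketch runs offline:
    # it "reflects" once per question mark, mimicking longer reasoning
    # traces on underspecified inputs. A real attack would query the model.
    n = prompt.count("?")
    return ("let me reconsider... " * n) + "answer"

def fitness(prompt: str) -> float:
    # Composite fitness mirroring the paper's objective: reward long
    # responses and frequent reflection. The 0.5 weight is an assumption.
    response = query_model(prompt)
    length = len(response.split())
    reflections = response.count("reconsider")
    return length + 0.5 * reflections

def mutate(prompt: str) -> str:
    # Perturb the problem's logical structure; appending an underspecified
    # clause is a crude stand-in for the paper's structured perturbations.
    fillers = ["what if x is undefined?", "assume the premise may be false?"]
    return prompt + " " + random.choice(fillers)

def genetic_attack(seed: str, pop_size: int = 8, generations: int = 5) -> str:
    # Single-level GA: keep the fitter half each generation and refill
    # the population by mutating survivors.
    population = [mutate(seed) for _ in range(pop_size)]
    for _ in range(generations):
        population.sort(key=fitness, reverse=True)
        survivors = population[: pop_size // 2]
        population = survivors + [
            mutate(random.choice(survivors))
            for _ in range(pop_size - len(survivors))
        ]
    return max(population, key=fitness)

best = genetic_attack("Solve for x: x + 2 = 5.")
```

Under this toy fitness, evolved prompts accumulate inconsistent clauses and score strictly higher than the clean seed, which is the resource-exhaustion dynamic the paper measures as increased latency and energy use.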
Entities
Institutions
- arXiv