LLMs for Natural Language Specification and Verification
Researchers are exploring whether large language models (LLMs) can generate specifications and verify code in natural language rather than in formal languages. The motivation is to keep LLM-generated code free of security vulnerabilities: formal verification could catch such flaws, but it requires rigid formal specifications, and prior attempts to have LLMs synthesize formal specifications met with limited success. The study therefore investigates compositional verification of implementations directly against natural language specifications, and preliminary results are promising.
Key facts
- Recent frontier LLMs show strong performance in identifying security vulnerabilities in large open-source systems.
- The goal is to prevent LLMs from producing vulnerable implementations.
- Formal verification typically requires rigid formal language specifications.
- Prior work on using LLMs to synthesize formal specifications had limited success.
- This paper investigates using LLMs for both specification generation and verification in natural language.
- The approach uses compositional verification.
- Preliminary results suggest the approach is promising.
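The paper itself does not publish an implementation, but the compositional idea in the key facts can be sketched: each component is checked against its own natural language specification while the specifications of the components it calls are assumed to hold, so the system is verified component by component rather than all at once. The sketch below is a minimal illustration under that assumption; the names (`Component`, `verify_component`, `fake_llm`) are hypothetical, and the LLM judge is replaced by an offline stub.

```python
from dataclasses import dataclass, field

@dataclass
class Component:
    name: str
    spec: str                 # natural language specification
    source: str               # implementation source code
    deps: list = field(default_factory=list)  # names of called components

def verify_component(comp, ask_llm):
    """Ask the judge whether `source` satisfies `spec`, assuming the
    specs of its dependencies hold -- the compositional step."""
    prompt = (
        f"Specification: {comp.spec}\n"
        f"Implementation:\n{comp.source}\n"
        "Assume the specifications of all called components hold. "
        "Answer VERIFIED or REFUTED."
    )
    return ask_llm(prompt) == "VERIFIED"

def verify_system(components, ask_llm):
    """The system is verified only if every component is verified
    in isolation against its own specification."""
    return {c.name: verify_component(c, ask_llm) for c in components}

# Offline stand-in for an LLM judge so the sketch runs without a model;
# a real judge would read the spec/implementation pair in the prompt.
def fake_llm(prompt):
    return "REFUTED" if "strcpy" in prompt else "VERIFIED"

comps = [
    Component("sanitize",
              "Escapes HTML metacharacters in the input string.",
              "def sanitize(s): return s.replace('<', '&lt;')"),
    Component("render",
              "Returns sanitized user content embedded in a page.",
              "def render(u): return page(sanitize(u))",
              deps=["sanitize"]),
]
results = verify_system(comps, fake_llm)
```

Because `render` is judged under the assumption that `sanitize` meets its spec, a flaw in `sanitize` is caught once, at its own verification step, rather than re-derived at every call site.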
Venue
- arXiv (preprint)