Viverra: AI Tool That Verifies Code Correctness
A new system called Viverra automatically produces formally verified annotations alongside LLM-generated C code, providing correctness guarantees. It prompts an LLM to synthesize a program together with candidate assertions that express safety and correctness properties, then verifies those assertions with a portfolio of bounded model checkers. Evaluated on 18 diverse programming tasks, Viverra targets a fundamental limitation of text-to-code generation: without correctness guarantees, developers must manually review and test AI-generated code, negating much of the productivity gain.
Key facts
- Viverra automatically produces formally verified annotations alongside generated code.
- It prompts an LLM to synthesize a C program with candidate assertions.
- Assertions express safety and correctness properties.
- Verification is done via a portfolio of bounded model checkers.
- Evaluation was performed on 18 diverse programming tasks.
- The system addresses a fundamental limitation of text-to-code: no correctness guarantees.
- Without such guarantees, developers must review, test, and maintain generated code.
- Viverra aims to aid user understanding of the generated program.
Entities
Venues
- arXiv