ARTFEED — Contemporary Art Intelligence

COEVO Framework Unifies Functional Correctness and PPA Optimization in LLM-Based RTL Generation

ai-technology · 2026-04-20

A new co-evolutionary framework called COEVO addresses limitations in LLM-based RTL code generation by jointly optimizing functional correctness alongside area, delay, and power metrics. Current methods typically separate these objectives, discarding partially correct but architecturally promising candidates through sequential pipelines or binary correctness gates. COEVO instead treats correctness as a continuous dimension within a single evolutionary loop, moving beyond scalar fitness reductions that obscure trade-offs between objectives. The framework employs an enhanced testbench that provides fine-grained scoring and detailed diagnostics. This contrasts with existing methods that impose hierarchical reward dependencies, and with evolutionary searches that prioritize correctness before PPA quality. The research was published on arXiv under identifier 2604.15001v2.
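The paper's exact algorithm is not detailed in this summary, but the core idea it describes — keeping correctness as one axis of a multi-objective search rather than a pass/fail gate or a term in a single scalar fitness — can be sketched with standard Pareto dominance. Everything below (the `Candidate` fields, names, and numbers) is an illustrative assumption, not the authors' implementation.

```python
from dataclasses import dataclass

@dataclass
class Candidate:
    """Hypothetical RTL candidate: correctness in [0, 1] from a fine-grained
    testbench (fraction of checks passed), plus PPA metrics from synthesis."""
    name: str
    correctness: float  # maximize
    area: float         # minimize
    delay: float        # minimize
    power: float        # minimize

def dominates(a: Candidate, b: Candidate) -> bool:
    """a dominates b if a is no worse in every objective and strictly better in at least one."""
    no_worse = (a.correctness >= b.correctness and a.area <= b.area
                and a.delay <= b.delay and a.power <= b.power)
    better = (a.correctness > b.correctness or a.area < b.area
              or a.delay < b.delay or a.power < b.power)
    return no_worse and better

def pareto_front(pop: list[Candidate]) -> list[Candidate]:
    """Candidates not dominated by any other; these survive to the next generation.
    Note that a partially correct but small/fast design survives here, whereas a
    binary correctness gate would discard it before PPA was ever considered."""
    return [c for c in pop if not any(dominates(o, c) for o in pop if o is not c)]

pop = [
    Candidate("fully_correct_big", 1.0, 900.0, 5.0, 12.0),
    Candidate("partial_small",     0.8, 300.0, 2.0,  4.0),
    Candidate("partial_dominated", 0.7, 950.0, 5.5, 13.0),
]
front = {c.name for c in pareto_front(pop)}
# "partial_small" stays on the front despite imperfect correctness;
# "partial_dominated" is worse than "fully_correct_big" on every axis and is pruned.
```

Collapsing these four objectives into one weighted sum would instead force a fixed trade-off up front, which is exactly the scalar-fitness reduction the summary says COEVO avoids.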

Key facts

  • COEVO is a co-evolutionary framework for LLM-based RTL generation
  • It unifies functional correctness and PPA optimization in a single evolutionary loop
  • Existing approaches decouple correctness and PPA objectives
  • Current methods discard partially correct but architecturally promising candidates
  • PPA optimization typically occurs only after correctness is achieved
  • Existing methods reduce multi-objective PPA space to single scalar fitness
  • COEVO formulates correctness as continuous co-optimization dimension
  • The framework uses enhanced testbench for fine-grained scoring and diagnostics

Entities

Institutions

  • arXiv
