EVIL Framework Uses LLM-Guided Evolution to Create Interpretable Algorithms for Dynamical Systems
EVIL (Evolving Interpretable algorithms with LLMs) is a research framework that uses evolutionary search guided by large language models to discover simple, interpretable algorithms for inference in dynamical systems. Rather than training neural networks on large datasets, EVIL evolves pure Python/NumPy programs that perform zero-shot, in-context inference across datasets. The framework is evaluated on three tasks: next-event prediction in temporal point processes, rate matrix estimation for Markov jump processes, and time series imputation. Notably, a single evolved algorithm generalizes across all evaluation datasets without per-dataset training, acting like an amortized inference model. The work, documented in arXiv:2604.15787v1, is the first to show that LLM-guided program evolution can discover a compact inference function for these problems, and the discovered algorithms are markedly simpler and more interpretable than competing neural network methods.
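The paper's evolved programs are not reproduced here, but the flavor of a "compact, interpretable inference function" for one of the three tasks can be illustrated with the classical maximum-likelihood estimator for a Markov jump process rate matrix: each off-diagonal rate is the count of observed jumps divided by the time spent in the source state. This is a standard textbook estimator used purely as an illustration, not EVIL's discovered algorithm.

```python
import numpy as np

def estimate_rate_matrix(states, times, n_states):
    """Classical MLE for a continuous-time Markov chain rate matrix.

    states: sequence of visited states (ints in [0, n_states)).
    times:  entry time of each state in the sequence.
    Off-diagonal rate Q[i, j] = (# jumps i -> j) / (total time in i);
    each diagonal entry is set so the row sums to zero.
    """
    jumps = np.zeros((n_states, n_states))
    holding = np.zeros(n_states)
    for k in range(len(states) - 1):
        i, j = states[k], states[k + 1]
        jumps[i, j] += 1
        holding[i] += times[k + 1] - times[k]
    # Divide jump counts by holding times, leaving rows with zero
    # holding time at zero instead of producing NaN/inf.
    Q = np.divide(jumps, holding[:, None],
                  out=np.zeros_like(jumps), where=holding[:, None] > 0)
    np.fill_diagonal(Q, 0.0)
    Q[np.arange(n_states), np.arange(n_states)] = -Q.sum(axis=1)
    return Q
```

For a two-state trajectory 0 → 1 → 0 → 1 with 4 total time units in state 0 and 1 in state 1, this yields rates Q[0, 1] = 2/4 = 0.5 and Q[1, 0] = 1/1 = 1.0 — the kind of short, inspectable NumPy program the framework searches for.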
Key facts
- EVIL stands for Evolving Interpretable algorithms with LLMs
- Uses LLM-guided evolutionary search to discover simple algorithms
- Evolves pure Python/NumPy programs for dynamical systems inference
- Performs zero-shot, in-context inference across datasets
- Applied to three tasks: next-event prediction, rate matrix estimation, time series imputation
- Single evolved algorithm generalizes across all evaluation datasets
- No per-dataset training required (amortized inference model)
- First work showing LLM-guided program evolution can discover compact inference functions for dynamical-systems problems
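The evolve-and-evaluate loop behind these facts can be sketched minimally as follows. Everything here is a hypothetical stand-in: the real framework has an LLM rewrite whole Python functions, whereas this toy version evolves a single blending parameter of a fixed imputation rule and uses a random perturbation in place of the LLM's proposed edit.

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy imputation task: recover masked values of a noisy sine wave.
t = np.linspace(0, 4 * np.pi, 200)
series = np.sin(t) + 0.05 * rng.standard_normal(t.size)
mask = rng.random(t.size) < 0.3            # True = value is missing
observed = np.where(mask, np.nan, series)

def impute(obs, alpha):
    """Candidate 'program': blend linear interpolation with the mean.

    alpha is the evolvable parameter; in EVIL the LLM would instead
    mutate the program's source code directly.
    """
    idx = np.arange(obs.size)
    known = ~np.isnan(obs)
    interp = np.interp(idx, idx[known], obs[known])
    return alpha * interp + (1 - alpha) * np.nanmean(obs)

def fitness(alpha):
    """Score a candidate: negative MSE on the held-out masked points."""
    return -np.mean((impute(observed, alpha)[mask] - series[mask]) ** 2)

def propose(alpha):
    """Stand-in for the LLM mutation step: a small random edit."""
    return float(np.clip(alpha + rng.normal(0, 0.1), 0.0, 1.0))

best = 0.5
for _ in range(100):                        # greedy (1+1) evolution loop
    cand = propose(best)
    if fitness(cand) > fitness(best):       # keep only improvements
        best = cand
```

After the loop, `best` drifts toward 1.0, since pure interpolation beats blending in the global mean on this task. The key property mirrored from the paper is that the search optimizes a program against an inference objective, rather than fitting model weights to each dataset.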