In-Context Examples Suppress LLM Scientific Knowledge Recall
A recent study finds that providing in-context examples to large language models (LLMs) suppresses their recall of pretrained scientific knowledge, shifting their computation from knowledge-driven derivation toward empirical pattern fitting. The study, documented in arXiv:2604.27540, evaluated 60 latent structure recovery tasks across five scientific domains, totaling 6,000 trials over four models. Tasks included estimating reaction constants in chemistry and inferring demand elasticities in economics. The suppression effect was consistent across domains, but its consequences for accuracy varied. These results challenge prevailing assumptions about how LLMs draw on domain-specific knowledge when examples are supplied.
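To make the experimental contrast concrete, here is a minimal sketch of the two prompt styles such a study would compare: a zero-shot prompt that invites recall of pretrained knowledge, and a few-shot prompt that supplies in-context observations to fit. The demand-elasticity task, the prompt wording, and the data points are hypothetical illustrations, not taken from the paper.

```python
# Hypothetical (price, quantity) observations for a demand-elasticity task.
# Illustrative only; not data from the study.
OBSERVATIONS = [
    (1.0, 100.0),
    (2.0, 52.0),
    (4.0, 26.0),
]

def zero_shot_prompt(good: str) -> str:
    # No examples: the model must lean on pretrained economic knowledge.
    return (
        f"Estimate the price elasticity of demand for {good}. "
        "Answer with a single number."
    )

def few_shot_prompt(good: str, obs: list[tuple[float, float]]) -> str:
    # With examples: per the study, these shift the model toward fitting
    # the observed pattern rather than recalling known values.
    lines = "\n".join(f"price={p}, quantity={q}" for p, q in obs)
    return (
        f"Observed demand data for {good}:\n{lines}\n"
        "Estimate the price elasticity of demand. Answer with a single number."
    )

if __name__ == "__main__":
    print(zero_shot_prompt("gasoline"))
    print()
    print(few_shot_prompt("gasoline", OBSERVATIONS))
```

Running the script prints the two prompts side by side; in the study's framing, the first probes knowledge recall and the second probes pattern fitting over the provided examples.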
Key facts
- Study published on arXiv with ID 2604.27540
- Tested 60 latent structure recovery tasks across five scientific domains
- Conducted 6,000 trials on four models
- In-context examples suppressed pretrained knowledge recall
- Shifted computation from knowledge-driven derivation to empirical pattern fitting (see the sketch after this list)
- Effect consistent across domains
- Accuracy consequences varied
- Tasks included chemistry reaction constants and economics demand elasticities
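To illustrate the derivation-versus-fitting distinction above, the sketch below shows the kind of "empirical pattern fitting" computation that in-context examples appear to induce: a least-squares fit that recovers a latent demand elasticity from log-transformed observations, rather than recalling a known value. The constant-elasticity demand model and the data (reused from the earlier sketch) are assumptions for illustration, not taken from the study.

```python
import math

def fit_elasticity(obs: list[tuple[float, float]]) -> float:
    # Constant-elasticity demand: Q = A * P**(-eps), so
    # ln Q = ln A - eps * ln P; eps is minus the OLS slope.
    xs = [math.log(p) for p, _ in obs]
    ys = [math.log(q) for _, q in obs]
    n = len(obs)
    mx, my = sum(xs) / n, sum(ys) / n
    slope = sum((x - mx) * (y - my) for x, y in zip(xs, ys)) / sum(
        (x - mx) ** 2 for x in xs
    )
    return -slope

# Hypothetical observations; elasticity comes out near 1 (roughly unit elastic).
print(fit_elasticity([(1.0, 100.0), (2.0, 52.0), (4.0, 26.0)]))  # ~0.97
```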
Entities
Institutions
- arXiv