Three Regimes of Context-Parametric Conflict in LLMs
A new arXiv paper (2605.11574v1) proposes a three-regime framework to reconcile contradictory findings on how large language models handle conflicts between parametric (training) knowledge and contradictory context documents. Prior studies reached opposing conclusions: some found that models stubbornly retain trained answers, ignoring documents nearly half the time, while others found that models defer to context approximately 96% of the time. The authors argue these contradictions arise because earlier experiments conflated three distinct processing situations. Regime 1 (single-source updating) is predicted by evidence coherence; Regime 2 (competitive integration) by parametric certainty; Regime 3 (task-appropriate selection) by task knowledge requirement. The framework also formalizes a distinction between parametric strength (exposure frequency) and parametric uniqueness.
Key facts
- arXiv paper 2605.11574v1 proposes three-regime framework for context-parametric conflict in LLMs
- Prior studies show contradictory results: models ignore documents ~50% of the time in some setups vs. defer to context ~96% of the time in others
- Regime 1: single-source updating, predicted by evidence coherence
- Regime 2: competitive integration, predicted by parametric certainty
- Regime 3: task-appropriate selection, predicted by task knowledge requirement
- Framework distinguishes parametric strength (exposure frequency) from parametric uniqueness
- Authors argue the contradictions dissolve once the three processing situations are distinguished
- Paper provides empirical validation of the framework
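The regime structure above can be sketched as a toy decision rule. Note the feature names, thresholds, and logic here are illustrative assumptions for exposition only, not the paper's actual method or measurements:

```python
from dataclasses import dataclass

@dataclass
class ConflictInstance:
    """Toy features of a context-parametric conflict (names are hypothetical)."""
    single_source: bool            # one context document contradicts the model
    evidence_coherence: float      # 0-1: internal consistency of the context evidence
    parametric_certainty: float    # 0-1: model's confidence in its trained answer
    task_requires_knowledge: bool  # task inherently needs parametric knowledge

def assign_regime(x: ConflictInstance) -> str:
    """Map an instance to one of the three regimes (hypothetical rule)."""
    if x.task_requires_knowledge:
        return "Regime 3: task-appropriate selection"
    if x.single_source:
        return "Regime 1: single-source updating"
    return "Regime 2: competitive integration"

def predicted_preference(x: ConflictInstance) -> str:
    """Which source the framework predicts the model favors, per regime."""
    regime = assign_regime(x)
    if regime.startswith("Regime 1"):
        # Regime 1 is predicted by evidence coherence:
        # coherent context should override parametric memory.
        return "context" if x.evidence_coherence > 0.5 else "parametric"
    if regime.startswith("Regime 2"):
        # Regime 2 is predicted by parametric certainty:
        # a confident trained answer resists the contradicting context.
        return "parametric" if x.parametric_certainty > 0.5 else "context"
    # Regime 3: the task's knowledge requirement determines the appropriate source.
    return "task-dependent"
```

This captures only the summary-level claim that each regime has a different predictor; the paper's empirical validation presumably operationalizes these predictors differently.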
Entities
Institutions
- arXiv