GSAR: A Typed Grounding Framework for Multi-Agent LLM Hallucination Detection
GSAR (Groundedness Scoring And Replanning) is a framework for detecting hallucination in multi-agent LLM systems. It introduces a four-way typology for claims: grounded, ungrounded, contradicted, and complementary. Unlike existing evaluators, which emit a single binary or scalar signal, GSAR assigns evidence-type-specific weights reflecting epistemic strength and computes a weighted groundedness score with an asymmetric contradiction penalty. The score feeds a three-tier decision function for downstream action, and the framework treats non-redundant alternative perspectives (complementary claims) as first-class evidence. The paper is available on arXiv with ID 2604.23366.
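The scoring idea can be sketched in a few lines. Everything below is illustrative: the claim-type names follow the summary, but the evidence types, weight values, and penalty factor are placeholder assumptions, not values from the paper.

```python
from enum import Enum

class ClaimType(Enum):
    GROUNDED = "grounded"
    UNGROUNDED = "ungrounded"
    CONTRADICTED = "contradicted"
    COMPLEMENTARY = "complementary"

# Hypothetical evidence-type weights reflecting epistemic strength;
# the paper's actual taxonomy and values are not reproduced here.
EVIDENCE_WEIGHTS = {"log": 1.0, "metric": 0.9, "trace": 0.8, "agent_report": 0.5}

def groundedness_score(claims, contradiction_penalty=2.0):
    """Contradiction-penalized weighted groundedness.

    The penalty is asymmetric: a contradicted claim subtracts more than a
    grounded claim of equal evidence weight adds. `claims` is a list of
    (ClaimType, evidence_type) pairs.
    """
    total, score = 0.0, 0.0
    for claim_type, evidence_type in claims:
        w = EVIDENCE_WEIGHTS.get(evidence_type, 0.5)
        total += w
        if claim_type in (ClaimType.GROUNDED, ClaimType.COMPLEMENTARY):
            # complementary (non-redundant) perspectives count as
            # first-class evidence alongside directly grounded claims
            score += w
        elif claim_type is ClaimType.CONTRADICTED:
            score -= contradiction_penalty * w
        # ungrounded claims contribute to the denominator only
    return score / total if total else 0.0
```

For example, one log-grounded claim, one contradicted agent report, and one ungrounded claim yield `(1.0 - 2.0 * 0.5) / 2.4 = 0.0` under these placeholder weights, showing how a single contradiction can cancel a grounded claim.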
Key facts
- GSAR partitions claims into grounded, ungrounded, contradicted, and complementary types.
- It assigns evidence-type-specific weights reflecting epistemic strength.
- It computes an asymmetric contradiction-penalized weighted groundedness score.
- The score is coupled to a three-tier decision function.
- The framework is designed for autonomous multi-agent LLM systems investigating operational incidents.
- It aims to ensure claims are grounded in observed evidence rather than model-internal inference.
- Existing methods treat all supporting evidence as interchangeable and emit a single binary or scalar signal.
- The paper is available on arXiv with ID 2604.23366.
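The three-tier decision function coupled to the score can be sketched as follows. The tier names and thresholds are placeholder assumptions; the summary only states that the score drives a three-tier decision and that replanning (the "R" in GSAR) is one downstream action.

```python
def decide(score, accept_threshold=0.7, replan_threshold=0.2):
    """Map a groundedness score to one of three tiers.

    Thresholds are illustrative placeholders, not the paper's values.
    """
    if score >= accept_threshold:
        return "accept"   # claims sufficiently grounded in observed evidence
    if score >= replan_threshold:
        return "verify"   # middle tier: gather more evidence before acting
    return "replan"       # low groundedness triggers replanning
```

Used on the score computed above, a well-grounded claim set is accepted while a contradiction-heavy one is sent back for replanning.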
Entities
Institutions
- arXiv