Derivation Prompting: Logic-Based Method Improves RAG Accuracy
Derivation Prompting is a novel prompting technique that aims to reduce hallucinations and flawed reasoning in Large Language Models (LLMs) on knowledge-intensive question-answering tasks. Inspired by logical derivations, the method builds an interpretable derivation tree by systematically applying established rules to initial hypotheses, and it targets the generation step of the Retrieval-Augmented Generation (RAG) pipeline. In a case study, Derivation Prompting significantly reduced unacceptable answers compared with conventional RAG and long-context-window approaches. The paper was posted on arXiv in the computer science section, under Computation and Language.
Key facts
- Derivation Prompting is a novel prompting technique for the generation step of RAG.
- It is inspired by logic derivations, deriving conclusions from initial hypotheses via systematic rule application.
- The method constructs an interpretable derivation tree to add control over generation.
- In a case study, it significantly reduced unacceptable answers compared with traditional RAG and long-context window methods.
- Addresses hallucinations and erroneous reasoning in LLMs for knowledge-intensive tasks.
- Published on arXiv under Computer Science > Computation and Language.
- The paper was submitted on May 14, 2025.
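The summary describes a derivation tree built by systematically applying rules to initial hypotheses. The paper's exact algorithm is not given here, so the following is only a minimal sketch of the general idea, assuming the tree is produced by forward chaining over statements retrieved in the RAG step; the `Node`, `derive`, and `render` names, the rule format, and the example statements are all illustrative inventions, not the authors' API.

```python
from dataclasses import dataclass, field
from typing import List, Optional


@dataclass
class Node:
    """One step in the derivation tree."""
    statement: str
    rule: Optional[str] = None        # rule used to derive this node; None for a hypothesis
    premises: List["Node"] = field(default_factory=list)


def derive(hypotheses, rules):
    """Forward-chain: repeatedly apply rules (premises -> conclusion) to known
    statements, recording each application as a node in the derivation tree."""
    known = {h: Node(h) for h in hypotheses}
    changed = True
    while changed:
        changed = False
        for name, (premises, conclusion) in rules.items():
            if conclusion not in known and all(p in known for p in premises):
                known[conclusion] = Node(conclusion, name, [known[p] for p in premises])
                changed = True
    return known


def render(node, depth=0):
    """Serialize a derivation tree as an indented, auditable trace that could be
    checked or fed back into the generation prompt."""
    tag = f"  [via {node.rule}]" if node.rule else "  [hypothesis]"
    lines = ["  " * depth + node.statement + tag]
    lines += [render(p, depth + 1) for p in node.premises]
    return "\n".join(lines)


# Hypothetical example: statements retrieved by the RAG step act as hypotheses.
hypotheses = ["Socrates is a man", "All men are mortal"]
rules = {
    "modus ponens": (["Socrates is a man", "All men are mortal"], "Socrates is mortal"),
}
known = derive(hypotheses, rules)
trace = render(known["Socrates is mortal"])
print(trace)
```

Because every conclusion in the tree points back to its premises and the rule that produced it, the generation step gains the kind of interpretability and control the summary attributes to the method: an answer unsupported by the tree can be flagged rather than emitted.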
Entities
Institutions
- arXiv