ARTFEED — Contemporary Art Intelligence

CERTA: A New RAG System for Appropriate Trust in LLMs

publication · 2026-05-06

A recent paper introduces CERTA (Certainty Enhanced RAG for Trustworthy Answers), a Retrieval-Augmented Generation (RAG) system designed to express appropriate levels of confidence and thereby build appropriate trust in AI systems. The work addresses over-confidence in Large Language Models (LLMs), which can produce hallucinated yet confident-sounding answers that make it hard for users to judge accuracy. CERTA models the relevance among the question, the retrieved context, and the answer in order to reflect uncertainty in its responses, and the authors also release the Certain dataset to support this effort. The stated goal is to align with the human value of benevolence by encouraging LLM self-reflection that yields trustworthy, truthful answers.

Key facts

  • The paper is titled 'I Don't Know' -- Towards Appropriate Trust with Certainty-Aware Retrieval Augmented Generation.
  • It was published on arXiv with ID 2605.00957.
  • The paper proposes CERTA (Certainty Enhanced RAG for Trustworthy Answers).
  • CERTA is a specialized RAG system that incorporates relevance among the question, retrieved context, and answer.
  • The system aims to reflect uncertainty in answering questions.
  • The paper also creates the Certain dataset.
  • The goal is to build appropriate trust in AI systems.
  • The paper addresses over-confidence and hallucination in LLMs.
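The core idea above can be illustrated with a minimal sketch. This is not the paper's method: the `relevance` scorer (a toy token-overlap proxy), the `threshold`, and the `generate` callback are all assumptions standing in for CERTA's learned relevance and generation components. The sketch shows the behavioral contract only: score how well the question, context, and answer relate, and abstain with "I don't know" when relevance is low.

```python
def relevance(a: str, b: str) -> float:
    """Toy relevance proxy: Jaccard overlap of lowercase token sets.
    (CERTA would use a learned relevance model instead.)"""
    ta, tb = set(a.lower().split()), set(b.lower().split())
    return len(ta & tb) / len(ta | tb) if ta | tb else 0.0

def certainty_aware_answer(question, contexts, generate, threshold=0.2):
    """Sketch of a certainty-aware RAG step (threshold is an assumption):
    answer only when the retrieved context looks relevant to the question
    and the generated answer looks grounded in that context; else abstain."""
    # Pick the context most relevant to the question.
    best = max(contexts, key=lambda c: relevance(question, c), default="")
    score = relevance(question, best)
    if score < threshold:
        return "I don't know", score  # low question-context relevance
    answer = generate(question, best)
    if relevance(answer, best) < threshold:
        return "I don't know", score  # answer not grounded in context
    return answer, score
```

With a stub generator, an on-topic question is answered while an off-topic one triggers the abstention path, which is the "appropriate trust" behavior the paper targets.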

Entities

Institutions

  • arXiv

Sources