LLMs Overuse External Tools Due to Knowledge Epistemic Illusion
A recent preprint (arXiv:2604.19749) finds that large language models (LLMs) often invoke external tools even when their internal knowledge would suffice. The authors term this pervasive behavior 'tool overuse' and observe it across diverse LLMs. They trace its root cause to a 'knowledge epistemic illusion': models misjudge the boundaries of their own knowledge and fall back on tools unnecessarily. To mitigate this, they propose a knowledge-aware epistemic boundary alignment strategy built on direct preference optimization, which reduces tool usage by 82.8% while also improving accuracy. The study further establishes a causal link between reward structures and tool-use behavior.
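The paper's alignment strategy builds on direct preference optimization (DPO). As a rough illustration of the mechanism (this is the standard DPO objective, not the paper's exact implementation; the "chosen"/"rejected" framing here, where a direct answer is preferred over an unnecessary tool call, is an assumption about how such preference pairs might be constructed):

```python
import math

def dpo_loss(logp_chosen, logp_rejected,
             ref_logp_chosen, ref_logp_rejected, beta=0.1):
    """Standard DPO loss for a single preference pair.

    Hypothetical framing for boundary alignment: `chosen` is a direct answer
    from internal knowledge (when the model actually knows the answer),
    `rejected` is an unnecessary tool call. Log-probabilities come from the
    policy being trained and a frozen reference model.
    """
    # Implicit reward margin: how much more the policy prefers the chosen
    # response than the reference model does, scaled by beta.
    margin = beta * ((logp_chosen - ref_logp_chosen)
                     - (logp_rejected - ref_logp_rejected))
    # -log(sigmoid(margin)), written stably: loss shrinks as the policy
    # learns to prefer answering directly over calling a tool.
    return math.log1p(math.exp(-margin))
```

With identical policy and reference log-probabilities the margin is zero and the loss is log 2; as the policy shifts probability mass toward the chosen (tool-free) response, the loss decreases.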
Key facts
- Tool overuse is a pervasive phenomenon across diverse LLMs.
- The root cause is a knowledge epistemic illusion: models misjudge internal knowledge boundaries.
- A knowledge-aware epistemic boundary alignment strategy reduces tool usage by 82.8%.
- The strategy also yields an accuracy improvement.
- The study establishes a causal link between reward structures and tool-use behavior.
- The research is published on arXiv with ID 2604.19749.
- The paper was announced as a new submission.
- The study concerns LLMs equipped with external tools.
Entities
Institutions
- arXiv