Novelty-Based Tree-of-Thought Improves LLM Reasoning
A new arXiv preprint introduces a novelty-based approach to Tree-of-Thought (ToT) search for large language models (LLMs), aiming to improve reasoning and planning while reducing computational cost. The method, inspired by width-based search in classical planning, defines a measurable notion of novelty that scores how distinct each thought node is relative to previously explored nodes in the search tree. Novelty is estimated by prompting the LLM itself, leveraging its pre-trained general knowledge. The score is then used to prune branches, focusing the search on more promising paths. The approach targets common weaknesses of LLM search, such as brittleness and high time and token consumption in reasoning tasks. The paper is available on arXiv under ID 2605.06040.
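The pruning idea can be sketched in a few lines. Note this is a minimal illustration, not the paper's implementation: the names `novelty_score` and `expand_with_novelty_pruning` and the `threshold` parameter are hypothetical, and where the paper prompts the LLM to judge novelty, the stand-in below uses a simple lexical-overlap heuristic so the sketch stays self-contained and runnable.

```python
def novelty_score(thought: str, explored: list[str]) -> float:
    """Proxy for node novelty: fraction of the thought's words not seen
    in any previously explored thought. The paper instead queries the
    LLM itself for this judgment; this heuristic is a stand-in."""
    words = set(thought.lower().split())
    if not words:
        return 0.0
    seen = set()
    for prior in explored:
        seen |= set(prior.lower().split())
    return len(words - seen) / len(words)

def expand_with_novelty_pruning(
    frontier: list[str], candidates: list[str], threshold: float = 0.3
) -> list[str]:
    """Keep only candidate thoughts whose novelty relative to the
    already-explored set exceeds a (hypothetical) threshold, pruning
    near-duplicate branches before they are expanded further."""
    explored = list(frontier)
    kept = []
    for candidate in candidates:
        if novelty_score(candidate, explored) > threshold:
            kept.append(candidate)
            explored.append(candidate)  # later candidates compare against it
    return kept

# A near-duplicate of an explored thought is pruned; a fresh one survives.
survivors = expand_with_novelty_pruning(
    frontier=["check if n is even"],
    candidates=["check if n is even again", "factor n into primes"],
)
print(survivors)  # ['factor n into primes']
```

Because each kept candidate is added to the explored set, the filter also deduplicates among the candidates themselves, which is the source of the time and token savings the paper reports.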
Key facts
- arXiv:2605.06040 introduces novelty-based Tree-of-Thought search for LLMs.
- Novelty measures uniqueness of a thought node compared to previously seen nodes.
- Novelty is estimated by prompting the LLM using its pre-trained knowledge.
- The method prunes branches to reduce time and token costs.
- Inspired by width-based search in planning.
- Aims to improve LLM reasoning and planning performance.
- Addresses brittleness and high computational costs in current LLM approaches.
- Published as a new arXiv preprint.
Entities
Institutions
- arXiv