AgenticCache: Cache-Driven Planning for Embodied AI Agents
A recent paper on arXiv presents AgenticCache, a planning framework designed to improve efficiency and reduce costs for embodied AI agents that rely on large language models (LLMs). The framework exploits plan locality in embodied tasks: future plans can often be anticipated from current ones. Rather than invoking an LLM at every step, AgenticCache maintains a runtime cache of common plan transitions and validates entries through a background Cache Updater. Across 12 experimental configurations (four multi-agent embodied benchmarks × three models), AgenticCache improves task success rate by 22% on average, reduces simulation latency by 65%, and halves token usage. The paper is available on arXiv, with accompanying code on GitHub.
Key facts
- AgenticCache is a cache-driven planning framework for embodied AI agents.
- It reduces per-step LLM calls by reusing cached plans.
- Embodied tasks exhibit strong plan locality.
- A background Cache Updater asynchronously validates cached entries.
- Tested on four multi-agent embodied benchmarks with three models.
- Improves task success rate by 22% on average.
- Reduces simulation latency by 65%.
- Lowers token usage by 50%.
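The mechanism above can be sketched in a few lines: look up the current (state, plan) transition in a cache, reuse the stored next plan on a hit, and fall back to the LLM only on a miss, with a background updater validating or evicting entries. This is a minimal illustrative sketch; names such as `PlanCache`, `next_plan`, and `llm_plan` are assumptions, not the paper's actual API.

```python
# Hypothetical sketch of cache-driven planning. PlanCache, next_plan,
# and llm_plan are illustrative names, not AgenticCache's real interface.

class PlanCache:
    """Maps a (state, current_plan) key to the next plan, exploiting plan locality."""

    def __init__(self):
        self._entries = {}  # key -> (plan, validated_flag)

    def get(self, key):
        entry = self._entries.get(key)
        # Only serve entries the background updater has marked valid.
        return entry[0] if entry and entry[1] else None

    def put(self, key, plan, validated=False):
        self._entries[key] = (plan, validated)

    def validate(self, key, is_valid):
        """Background Cache Updater: confirm an entry or evict it."""
        if key in self._entries:
            if is_valid:
                plan, _ = self._entries[key]
                self._entries[key] = (plan, True)
            else:
                del self._entries[key]


def next_plan(cache, state, current_plan, llm_plan):
    """Return (plan, cache_hit): consult the cache before invoking the LLM."""
    key = (state, current_plan)
    cached = cache.get(key)
    if cached is not None:
        return cached, True            # cache hit: no LLM call needed
    plan = llm_plan(state, current_plan)  # cache miss: fall back to the LLM
    cache.put(key, plan, validated=True)  # sketch: trust fresh LLM output
    return plan, False
```

A repeated transition then skips the LLM entirely, which is where the reported latency and token savings would come from in this simplified picture.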