Rose 1 reduces LLM input tokens by 70% without quality loss
Adola has launched Rose 1, a context compression tool that reduces LLM input tokens by 70% while maintaining answer quality. Before a model call, the tool trims noisy context such as duplicate notes, stale search snippets, and unrelated history, keeping only essential elements like schema, policy exceptions, account tier, and citation trails. Rose 1 targets use cases where context piles up: agent traces, retrieval, prompt gateways, and support copilots. It can compress ticket history, policy docs, and account context, with no measured drop in performance on reasoning, science, and math checks. To get started, users create a workspace, issue a project key, and run the playground to measure results.
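The idea behind this kind of compression can be illustrated with a minimal sketch: deduplicate entries and drop items unrelated to the current query, while always keeping essential fields. This is a generic illustration of the technique, not Rose 1's actual API; the field names (`kind`, `text`) and the `compress_context` helper are assumptions made for the example.

```python
# Generic sketch of context compression before a model call:
# drop exact-duplicate notes and entries unrelated to the query,
# always keep essential fields (schema, policy exceptions, account tier,
# citation trails). Illustrative only -- not Rose 1's real interface.

KEEP_ALWAYS = {"schema", "policy_exception", "account_tier", "citation_trail"}

def compress_context(entries, query_terms):
    """Deduplicate entries; keep always-essential kinds plus entries
    sharing at least one term with the query."""
    seen = set()
    kept = []
    for entry in entries:
        key = entry["text"].strip().lower()
        if key in seen:
            continue  # exact duplicate: drop
        seen.add(key)
        if entry.get("kind") in KEEP_ALWAYS or set(key.split()) & query_terms:
            kept.append(entry)
    return kept

entries = [
    {"kind": "note", "text": "Customer reports login failure"},
    {"kind": "note", "text": "Customer reports login failure"},  # duplicate
    {"kind": "search", "text": "old snippet about shipping rates"},
    {"kind": "account_tier", "text": "tier: enterprise"},
]
compressed = compress_context(entries, query_terms={"login", "failure"})
print(len(entries), "->", len(compressed))  # 4 -> 2
```

In this toy run, the duplicate note and the unrelated search snippet are dropped, while the relevant note and the account-tier field survive.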
Key facts
- Rose 1 reduces LLM input tokens by 70%.
- It keeps answers stable across reasoning, science, and math checks.
- The tool trims noisy context like duplicate notes and unrelated ticket history.
- It preserves schema, policy exceptions, account tier, and citation trails.
- Rose 1 is available from Adola at adola.app.
- It is designed for agent traces, retrieval, prompt gateways, and support copilots.
- Users can create a workspace and run the playground to test.
- There is no measured drop in performance at 70% compression.
Entities
Institutions
- Adola