ARTFEED — Contemporary Art Intelligence

NoisyCoconut: Enhancing LLM Reliability via Latent Space Reasoning

ai-technology · 2026-05-12

A new inference-time method called NoisyCoconut improves large language model (LLM) reliability by manipulating internal representations, with no retraining involved. The approach injects controlled noise into latent trajectories to generate diverse reasoning paths; unanimous agreement among these paths serves as a confidence signal, letting the model abstain when the paths disagree. Because it needs neither training data nor parameter modification, the method works with existing models as deployed. Across multiple reasoning benchmarks, experiments demonstrate effective coverage-accuracy tradeoffs: by abstaining on uncertain inputs, the model answers fewer questions but is more accurate on those it does answer.
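
In code terms, the recipe reduces to "perturb the hidden states, then vote." The sketch below is a minimal illustration in that spirit, not the paper's implementation: it assumes GPT-2 as a stand-in backbone, Gaussian noise added at one hypothetical mid-layer (LAYER_IDX), a hypothetical noise magnitude (NOISE_SCALE), greedy decoding, and exact string match as the agreement test.

    # Minimal latent-noise voting sketch in the spirit of NoisyCoconut.
    # Backbone, layer index, noise scale, and agreement test are all
    # illustrative assumptions, not details from the article.
    import torch
    from transformers import AutoModelForCausalLM, AutoTokenizer

    MODEL_NAME = "gpt2"   # stand-in backbone
    LAYER_IDX = 6         # hypothetical injection site (a mid-layer block)
    NOISE_SCALE = 0.02    # hypothetical noise magnitude

    tok = AutoTokenizer.from_pretrained(MODEL_NAME)
    model = AutoModelForCausalLM.from_pretrained(MODEL_NAME).eval()

    def noise_hook(module, inputs, output):
        # GPT-2 blocks return a tuple; output[0] is the hidden-state tensor.
        # Adding noise here perturbs the latent trajectory at every decode step.
        noisy = output[0] + NOISE_SCALE * torch.randn_like(output[0])
        return (noisy,) + output[1:]

    @torch.no_grad()
    def answer_once(prompt):
        handle = model.transformer.h[LAYER_IDX].register_forward_hook(noise_hook)
        try:
            ids = tok(prompt, return_tensors="pt")
            out = model.generate(**ids, max_new_tokens=20, do_sample=False,
                                 pad_token_id=tok.eos_token_id)
            new_tokens = out[0, ids["input_ids"].shape[1]:]
            return tok.decode(new_tokens, skip_special_tokens=True).strip()
        finally:
            handle.remove()  # detach the hook so each run is independent

    def answer_or_abstain(prompt, n_paths=5):
        # Sample several noise-perturbed reasoning paths; answer only on unanimity.
        answers = {answer_once(prompt) for _ in range(n_paths)}
        return answers.pop() if len(answers) == 1 else None  # None = abstain

    print(answer_or_abstain("Q: What is 2 + 2?\nA:"))

Returning None encodes abstention, so a caller can route unanswered queries to a human or a stronger model.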

Key facts

  • NoisyCoconut is an inference-time method for LLMs
  • It injects controlled noise into latent trajectories
  • Diverse reasoning paths are generated from noise injection
  • Unanimous agreement among paths provides a confidence signal
  • Models can abstain when uncertain
  • No retraining or training data required
  • Effective coverage-accuracy tradeoffs demonstrated (see the sketch after this list)
  • Compatible with existing models
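
The coverage-accuracy tradeoff itself is standard selective-prediction bookkeeping: coverage is the fraction of questions the model chooses to answer, and selective accuracy is measured only on that answered subset. A self-contained sketch with illustrative data (not the paper's results):

    # Selective-prediction metrics for an abstaining model.
    # The predictions and gold labels below are made-up examples.
    def coverage_accuracy(predictions, gold):
        # predictions: one output per question, or None where the model abstained
        answered = [(p, g) for p, g in zip(predictions, gold) if p is not None]
        coverage = len(answered) / len(gold)
        accuracy = sum(p == g for p, g in answered) / len(answered) if answered else 0.0
        return coverage, accuracy

    preds = ["4", None, "7", "9", None]   # None marks an abstention
    gold  = ["4", "3", "7", "8", "1"]
    cov, acc = coverage_accuracy(preds, gold)
    print(f"coverage={cov:.2f}, selective accuracy={acc:.2f}")
    # coverage=0.60, selective accuracy=0.67

Raising the bar for agreement (more paths, or a stricter match) lowers coverage and typically raises selective accuracy; sweeping that knob traces out the tradeoff curve the experiments report.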
