ARTFEED — Contemporary Art Intelligence

Unified Entropy Minimization Framework Boosts Whisper ASR Across 20+ Domains

ai-technology · 2026-05-12

A new study published on arXiv presents a unified approach to entropy minimization in autoregressive models, with a focus on test-time adaptation. The authors derive a rigorous formulation whose objective decomposes into two parts: a token-level policy gradient loss and a token-level entropy loss. Prior strategies based on teacher forcing or reinforcement learning emerge as partial realizations of this unified formulation. Evaluations with OpenAI's Whisper ASR show consistent gains across more than 20 diverse domains, covering challenges such as background noise, varied accents, and multilingual speech. The research paper, titled "Rethinking Entropy Minimization in Test-Time Adaptation for Autoregressive Models," is accessible on arXiv under the identifier 2605.08186.
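To give a rough sense of how such a decomposition can arise, the sketch below applies the chain rule for entropy to an autoregressive model: the sequence-level entropy splits into per-token terms, and differentiating each term yields both a token-level entropy gradient and a policy-gradient-style score term. This is an illustrative derivation under standard assumptions, not necessarily the paper's exact formulation or weighting.

% Sketch: chain rule for the entropy of an autoregressive model p_theta(y | x).
% H_t denotes the entropy of the next-token distribution at step t.
\begin{align}
H\bigl(p_\theta(y \mid x)\bigr)
  &= \sum_{t} \mathbb{E}_{y_{<t} \sim p_\theta}\bigl[ H_t(y_{<t}) \bigr],
  \qquad
  H_t(y_{<t}) = H\bigl(p_\theta(\,\cdot \mid y_{<t}, x)\bigr) \\
\nabla_\theta H
  &= \sum_{t} \mathbb{E}_{y_{<t}}
     \Bigl[ \underbrace{\nabla_\theta H_t}_{\text{token-level entropy term}}
          \;+\; \underbrace{H_t \, \nabla_\theta \log p_\theta(y_{<t} \mid x)}_{\text{token-level policy gradient term}} \Bigr]
\end{align}

The second line follows from the score-function (REINFORCE) identity: the expectation is taken over prefixes sampled from the model itself, so the sampling distribution also depends on the parameters.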

Key facts

  • The study derives a rigorous formulation of entropy minimization for autoregressive models.
  • The objective decomposes into token-level policy gradient loss and token-level entropy loss.
  • Prior methods are reinterpreted as partial realizations of this unified formulation.
  • Whisper ASR is used as a testbed for the approach (see the sketch after this list).
  • Performance improves across more than 20 diverse domains.
  • Domains include acoustic noise, accents, and multilingual settings.
  • The paper is available on arXiv under reference 2605.08186.
  • The study addresses theoretical fragmentation in test-time adaptation for generative models.
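To make the Whisper testbed concrete, here is a minimal, hypothetical sketch of a test-time adaptation step combining the two token-level losses, assuming the Hugging Face transformers API. The greedy pseudo-labels, the equal loss weighting, and the adapt_step helper are illustrative assumptions, not the paper's reference implementation.

# Hypothetical sketch of an entropy-minimization test-time adaptation step for
# Whisper, assuming the Hugging Face transformers API. The loss weighting and
# greedy pseudo-labels are illustrative assumptions, not the paper's recipe.
import torch
import torch.nn.functional as F
from transformers import WhisperForConditionalGeneration, WhisperProcessor

processor = WhisperProcessor.from_pretrained("openai/whisper-base")
model = WhisperForConditionalGeneration.from_pretrained("openai/whisper-base")
optimizer = torch.optim.AdamW(model.parameters(), lr=1e-5)

def adapt_step(input_features: torch.Tensor) -> float:
    # 1. Decode a pseudo-transcript with the current model (no labels at test time).
    with torch.no_grad():
        pred_ids = model.generate(input_features)

    # 2. Re-run the decoder on its own output to get per-token distributions.
    out = model(input_features=input_features, decoder_input_ids=pred_ids[:, :-1])
    log_probs = F.log_softmax(out.logits, dim=-1)            # (batch, seq, vocab)

    # 3. Token-level entropy loss: entropy of each next-token distribution.
    token_entropy = -(log_probs.exp() * log_probs).sum(-1)   # (batch, seq)
    entropy_loss = token_entropy.mean()

    # 4. Token-level policy-gradient-style loss: log-prob of the decoded tokens,
    #    weighted by their detached entropy as a per-token "reward" signal.
    taken = log_probs.gather(-1, pred_ids[:, 1:].unsqueeze(-1)).squeeze(-1)
    pg_loss = (token_entropy.detach() * taken).mean()

    loss = entropy_loss + pg_loss  # equal weighting is an assumption here
    optimizer.zero_grad()
    loss.backward()
    optimizer.step()
    return loss.item()

Here input_features would come from processor(audio, sampling_rate=16000, return_tensors="pt").input_features; in practice, test-time adaptation methods often update only a subset of parameters (for example, layer norms) rather than the full model.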

Entities

Institutions

  • arXiv
  • OpenAI
