ARTFEED — Contemporary Art Intelligence

PAC-Bayesian Theory Establishes Generalization Bounds for Early-Exit Neural Networks

ai-technology · 2026-04-20

A new theoretical framework addresses the lack of understanding about generalization in early-exit neural networks, which allow predictions to exit at intermediate layers for a 2-8× inference speedup. The paper establishes a unified PAC-Bayesian analysis with novel entropy-based bounds that depend on the exit-depth entropy H(D) and the expected depth 𝔼[D] rather than on the maximum depth K. Sample complexity is shown to be 𝒪((𝔼[D]·d + H(D))/ε²). The analysis yields explicit constructive constants, with leading coefficient √(2 ln 2) ≈ 1.177 and a complete derivation. Sufficient conditions are established under which adaptive-depth networks strictly outperform their fixed-depth counterparts. The work fills a theoretical gap explicitly identified in recent surveys of the field.
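To make the reported rate concrete, here is a minimal sketch that evaluates the 𝒪((𝔼[D]·d + H(D))/ε²) scaling against the naive 𝒪(K·d/ε²) rate tied to maximum depth. The exit-probability distribution, width `d`, and accuracy `eps` below are hypothetical illustrations, not values from the paper, and the functions report only the scaling (constants omitted).

```python
import math

# Hypothetical exit-depth distribution for a 6-exit network:
# most inputs exit early, a few run the full depth.
exit_probs = [0.40, 0.25, 0.15, 0.10, 0.06, 0.04]

def expected_depth(p):
    """E[D]: average number of layers actually executed."""
    return sum((k + 1) * pk for k, pk in enumerate(p))

def depth_entropy(p):
    """H(D) in bits: uncertainty over which exit an input takes."""
    return -sum(pk * math.log2(pk) for pk in p if pk > 0)

def entropy_rate(p, d, eps):
    """Scaling of the entropy-based rate O((E[D]*d + H(D)) / eps^2)."""
    return (expected_depth(p) * d + depth_entropy(p)) / eps**2

def max_depth_rate(K, d, eps):
    """Scaling of the naive max-depth rate O(K*d / eps^2)."""
    return K * d / eps**2

d, eps, K = 1000, 0.1, 6
print(f"E[D] = {expected_depth(exit_probs):.2f}, "
      f"H(D) = {depth_entropy(exit_probs):.2f} bits")
print(f"entropy-based rate: {entropy_rate(exit_probs, d, eps):.3e}  "
      f"vs  max-depth rate: {max_depth_rate(K, d, eps):.3e}")
```

Because the entropy-based rate tracks 𝔼[D] (here ≈ 2.29) rather than K = 6, it is substantially smaller whenever most inputs exit early.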

Key facts

  • Early-exit neural networks enable adaptive computation with 2-8× inference speedup
  • Their generalization properties lacked a theoretical treatment until this paper
  • Establishes unified PAC-Bayesian framework for adaptive-depth networks
  • Proves first generalization bounds depending on exit-depth entropy H(D) and expected depth 𝔼[D]
  • Sample complexity is 𝒪((𝔼[D]·d + H(D))/ε²)
  • Analysis yields leading coefficient √2ln2 ≈ 1.177 with complete derivation
  • Establishes conditions where adaptive-depth networks outperform fixed-depth counterparts
  • Addresses theoretical gap identified in recent surveys
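For context on where the entropy terms can enter, the classical PAC-Bayesian bound (McAllester) has the following well-known form; the decomposition of the KL term sketched in the comments is our illustrative reading of how 𝔼[D]·d and H(D) could arise, not a quote from the paper.

```latex
% Classical PAC-Bayesian bound: with probability at least 1 - \delta,
% simultaneously for all posteriors Q over hypotheses,
\[
  \mathbb{E}_{h \sim Q}\big[L(h)\big]
    \;\le\; \mathbb{E}_{h \sim Q}\big[\widehat{L}(h)\big]
    + \sqrt{\frac{\mathrm{KL}(Q \,\|\, P) + \ln \tfrac{2\sqrt{n}}{\delta}}{2n}}.
\]
% In an early-exit setting, KL(Q || P) plausibly splits into a per-layer
% complexity part scaling with E[D] * d and an exit-selection part scaling
% with H(D); measuring H(D) in bits contributes a ln 2 factor, which is at
% least consistent with the reported sqrt(2 ln 2) ≈ 1.177 leading constant.
```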

Entities

Institutions

  • arXiv

Sources