ARTFEED — Contemporary Art Intelligence

Bernstein Polynomials Enable Smooth Yet Efficient Neural Activation

other · 2026-05-06

A new paper on arXiv proposes a general smoothing framework for neural network activation functions grounded in constructive approximation theory. The authors introduce the Bernstein Linear Unit (BerLU), which uses Bernstein polynomials to build a differentiable quadratic transition region around the origin. The design removes the non-differentiable point at zero while keeping the function piecewise linear outside the transition region, balancing optimization stability against computational cost. This addresses the limitations of existing approaches: piecewise linear activations are not differentiable everywhere, while smooth activations pay a high computational price for transcendental operations. Theoretical analysis establishes strict continuity and differentiability. The work is relevant for deep learning practitioners who want activation functions that are both efficient and stable to optimize.
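
The digest does not quote the paper's formula, but the construction it describes can be sketched in a few lines of Python: a ReLU-like activation whose kink at the origin is replaced by a degree-2 (quadratic) Bernstein blend over a small interval. The half-width eps, the interval [-eps, eps], and the control points used below are illustrative assumptions rather than the authors' definition.

    # Sketch of a ReLU-like activation with a quadratic Bernstein transition
    # region, in the spirit of the BerLU described above. The half-width eps,
    # the interval [-eps, eps], and the Bernstein control points (0, 0, eps)
    # are illustrative assumptions, not the authors' exact construction.
    import numpy as np

    def berlu_sketch(x, eps=0.5):
        """ReLU outside [-eps, eps]; degree-2 Bernstein (quadratic) blend inside.

        On [-eps, eps], map x to t = (x + eps) / (2 * eps) in [0, 1] and evaluate
        the Bernstein form 0*(1 - t)**2 + 0*2*t*(1 - t) + eps*t**2, which equals
        (x + eps)**2 / (4 * eps). It matches the value and slope of ReLU at both
        ends of the interval, so the result is continuously differentiable (C^1).
        """
        t = np.clip((x + eps) / (2.0 * eps), 0.0, 1.0)
        transition = eps * t ** 2
        return np.where(x <= -eps, 0.0, np.where(x >= eps, x, transition))

Outside the transition the function is exactly linear, and inside it the forward pass costs only a shift, a square, and a multiply, with no exponentials or error functions, which is where the efficiency advantage summarized above would come from.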

Key facts

  • Paper published on arXiv with ID 2605.02591
  • Proposes a smoothing framework based on constructive approximation theory
  • Introduces Bernstein Linear Unit (BerLU) activation function
  • Uses Bernstein polynomials to construct a differentiable quadratic transition region
  • Removes the non-differentiable point at the origin while keeping a piecewise linear structure outside the transition region
  • Addresses trade-off between optimization stability and computational efficiency
  • Theoretical analysis guarantees strict continuity and differentiability (a numeric check of this property is sketched after this list)
  • Publication date: 2026 (arXiv:2605.02591)
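
As a companion to the last point above, the short check below verifies numerically that the assumed quadratic transition meets the two linear pieces with matching values and slopes at x = -eps and x = +eps. It repeats the illustrative berlu_sketch from the earlier snippet; the paper's formal analysis applies to the authors' exact BerLU, not to this stand-in.

    # Numeric sanity check of smoothness at the junction points x = -eps and
    # x = +eps, using the same illustrative construction as the sketch above
    # (not the paper's exact BerLU). Values and one-sided slopes should agree
    # up to finite-difference error.
    import numpy as np

    def berlu_sketch(x, eps=0.5):
        t = np.clip((x + eps) / (2.0 * eps), 0.0, 1.0)
        return np.where(x <= -eps, 0.0, np.where(x >= eps, x, eps * t ** 2))

    def slope(f, x0, h=1e-6):
        # Central finite difference, evaluated slightly away from the junction.
        return float((f(x0 + h) - f(x0 - h)) / (2.0 * h))

    eps = 0.5
    for x0 in (-eps, eps):
        value_gap = abs(float(berlu_sketch(x0 + 1e-8)) - float(berlu_sketch(x0 - 1e-8)))
        slope_gap = abs(slope(berlu_sketch, x0 + 1e-4) - slope(berlu_sketch, x0 - 1e-4))
        print(f"x0 = {x0:+.2f}: value gap {value_gap:.2e}, slope gap {slope_gap:.2e}")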

Entities

Institutions

  • arXiv
