ARTFEED — Contemporary Art Intelligence

Neural Operators Outperform MLPs in Function Interpolation

ai-technology · 2026-05-11

A recent study reinterprets neural operators (NOs) as effective function interpolators, showing that they outperform traditional multilayer perceptrons (MLPs) and Kolmogorov–Arnold Networks on analytical benchmarks. By introducing an auxiliary base-space, the authors recast finite-dimensional functions as operators that act through composition. The approach requires fewer parameters and less training time while achieving comparable or better accuracy. As a practical application, a two-dimensional Tensorized Fourier Neural Operator (TFNO) ensemble was trained on the nuclear chart to learn corrections to state-of-the-art nuclear mass models, achieving a held-out root-mean-square error of 198.2 keV, placing it among the best recent neural-network methodologies.
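The core reframing can be sketched in a few lines. This is a minimal illustration, not the paper's actual construction: a finite-dimensional function f is lifted to an operator F that acts on base-space functions g by composition, F(g) = f ∘ g. All names below (`lift_to_operator`, the choice of f and g) are hypothetical.

```python
import numpy as np

def lift_to_operator(f):
    """Lift a finite-dimensional function f into an operator that acts
    by composition: F(g) = f o g, mapping base-space functions to functions."""
    def F(g):
        return lambda t: f(g(t))
    return F

f = lambda x: np.sin(x)      # target function, viewed as data to interpolate
F = lift_to_operator(f)      # operator view of f

# Evaluating the operator on a sample base function g(t) = t**2
# yields sin(t**2) pointwise on any grid t.
t = np.linspace(0.0, 1.0, 5)
g = lambda t: t ** 2
out = F(g)(t)
```

A neural operator trained to approximate F then inherits a function-interpolation task from the original f, which is the viewpoint the study exploits.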

Key facts

  • Neural operators (NOs) are designed to learn maps between infinite-dimensional function spaces.
  • The study introduces an auxiliary base-space to reframe finite-dimensional functions as operators.
  • NOs match or outperform MLPs and Kolmogorov–Arnold Networks in accuracy.
  • NOs require significantly fewer parameters and less training time.
  • A two-dimensional Tensorized Fourier Neural Operator (TFNO) was applied to the nuclear chart.
  • The TFNO ensemble achieved a held-out root-mean-square error of 198.2 keV.
  • The approach ranks among the best recent neural-network methods for nuclear mass models.
  • The research is published on arXiv under ID 2605.07792.
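The mass-model application follows a residual-learning pattern: each ensemble member learns a correction to a theoretical mass, member outputs are averaged, and accuracy is reported as a held-out RMSE in keV. The sketch below uses entirely synthetic numbers and stand-in "member corrections" to show only the bookkeeping, not the TFNO itself.

```python
import numpy as np

rng = np.random.default_rng(0)
n = 100  # size of a synthetic held-out set

# Synthetic theoretical masses and "experimental" masses (keV);
# the true residual the ensemble should learn has ~300 keV spread.
theory = rng.normal(0.0, 5000.0, size=n)
experiment = theory + rng.normal(0.0, 300.0, size=n)

# Hypothetical ensemble: each member's learned correction is the true
# residual plus independent model error.
n_members = 5
corrections = np.stack([
    (experiment - theory) + rng.normal(0.0, 200.0, size=n)
    for _ in range(n_members)
])

# Ensemble prediction: average member corrections, add back to theory.
predicted = theory + corrections.mean(axis=0)

# Held-out root-mean-square error in keV.
rmse = np.sqrt(np.mean((predicted - experiment) ** 2))
print(f"ensemble RMSE: {rmse:.1f} keV")
```

Averaging independent member errors shrinks the residual noise roughly by a factor of √5 here, which is the usual motivation for reporting the ensemble's RMSE rather than any single member's.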

Entities

Institutions

  • arXiv

Sources