Mixed Precision Training Framework for Neural ODEs
A new mixed precision training framework for neural ordinary differential equations (Neural ODEs) has been proposed. The framework uses low-precision arithmetic to evaluate the velocity field parameterized by the neural network and to store intermediate values, while keeping the weights in high precision. It incorporates explicit ODE solvers and a custom backpropagation scheme. The approach aims to reduce computational cost and memory usage without sacrificing accuracy, and experiments on a range of learning tasks demonstrate its effectiveness. The work addresses the challenge of applying mixed precision to continuous-time architectures, a setting in which naive low-precision computation previously suffered from roundoff errors and instabilities.
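To make the split concrete, here is a minimal NumPy sketch of the general idea: an explicit Euler solve in which each velocity evaluation runs in float16 while the weights and the state accumulation stay in float32. The velocity field `tanh(W @ h)`, the function names, and the step count are illustrative assumptions, not the paper's actual architecture or solver.

```python
import numpy as np

def velocity(h, W):
    # Illustrative velocity field f(h) = tanh(W h), a stand-in for the
    # neural network. Inputs are cast down so the matmul and activation
    # run in low precision (float16).
    h16 = h.astype(np.float16)
    W16 = W.astype(np.float16)  # weights cast down only for this evaluation
    return np.tanh(W16 @ h16)   # float16 result

def euler_integrate(h0, W, n_steps=10, T=1.0):
    # Explicit Euler solve of dh/dt = f(h): each velocity evaluation is
    # low precision, but the state update accumulates in float32.
    dt = np.float32(T / n_steps)
    h = h0.astype(np.float32)   # high-precision accumulator
    for _ in range(n_steps):
        v = velocity(h, W)                 # low-precision velocity
        h = h + dt * v.astype(np.float32)  # accumulate in high precision
    return h
```

Keeping the accumulator in float32 matters because the per-step float16 roundoff (~1e-3 relative) would otherwise compound across the integration steps.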
Key facts
- Mixed precision training framework for Neural ODEs proposed
- Low-precision computations for velocity evaluation and intermediate storage
- Weights stored in high precision
- Explicit ODE solvers and custom backpropagation used
- Reduces computational costs and memory usage
- Effective on range of learning tasks
- Addresses roundoff errors and instabilities in continuous-time architectures
- Published on arXiv: 2510.23498v2
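The low-precision intermediate storage and custom backpropagation listed above can be sketched as follows: the forward Euler pass checkpoints each state in float16, and the backward pass recovers those checkpoints while keeping the adjoint and the weight-gradient accumulation in float32. The loss `0.5 * ||h_N||^2`, the `tanh` velocity, and all names are assumptions for illustration; this is not the paper's actual scheme.

```python
import numpy as np

def forward_store(h0, W, n_steps=10, dt=0.1):
    # Explicit Euler forward pass; intermediate states are checkpointed
    # in float16 (low-precision storage) while stepping in float32.
    h = h0.astype(np.float32)
    stored = []
    for _ in range(n_steps):
        stored.append(h.astype(np.float16))  # low-precision checkpoint
        h = h + dt * np.tanh(W @ h)          # state update
    return h, stored

def backward(hN, W, stored, dt=0.1):
    # Backprop through the Euler steps for the illustrative loss
    # L = 0.5 * ||h_N||^2. Checkpoints are cast back up; the adjoint
    # and the weight gradient stay in float32.
    a = hN.astype(np.float32)                # dL/dh_N
    grad_W = np.zeros_like(W, dtype=np.float32)
    for hk16 in reversed(stored):
        hk = hk16.astype(np.float32)         # recover checkpoint
        u = np.tanh(W @ hk)
        s = (1.0 - u * u) * a                # D_k a_{k+1}
        grad_W += dt * np.outer(s, hk)       # high-precision accumulation
        a = a + dt * (W.T @ s)               # adjoint recursion
    return grad_W, a
```

The float16 checkpoints halve activation memory at the cost of a small perturbation in the recomputed Jacobians, while the float32 gradient accumulator avoids compounding that error across steps.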