ARTFEED — Contemporary Art Intelligence

Parallel-in-Time Training Boosts Recurrent Neural Networks for Dynamical Systems

ai-technology · 2026-05-14

A new arXiv preprint (2605.12683) explores parallel-in-time training algorithms for recurrent neural networks (RNNs) used in dynamical systems reconstruction (DSR). Classical backpropagation through time is inherently sequential, with runtime scaling linearly in sequence length, O(T), which limits the sequence lengths that can be trained in practice. The paper examines two classes of algorithms that leverage parallel associative scans to reduce this to O(log T) parallel time. The first class covers models with linear non-autonomous dynamics and nonlinear readouts, such as modern State Space Models (SSMs). The second comprises general nonlinear models parallelized via the DEER framework. The authors argue that the resulting ability to process much longer sequences opens new avenues for DSR in science and engineering.

Key facts

  • Preprint arXiv:2605.12683v1
  • Focuses on parallel-in-time training of RNNs for dynamical systems reconstruction
  • Classical backpropagation through time runs in O(T) sequential time
  • The studied algorithms reduce this to O(log T) parallel time
  • Two classes studied: linear non-autonomous models with nonlinear readouts (e.g., SSMs) and general nonlinear models parallelized via the DEER framework (see the sketch after this list)
  • Both classes use parallel associative scans as core primitive
  • Enables processing of longer sequences for DSR
  • Addresses a fundamental challenge in science and engineering

Entities

Institutions

  • arXiv

Sources