ARTFEED — Contemporary Art Intelligence

Recurrent Graph Neural Networks: Halting vs Converging Models

other · 2026-04-30

A new study posted on arXiv (2604.25551) examines three models of Recurrent Graph Neural Networks (RGNNs). Converging RGNNs require all vertex representations to stabilise; output-converging RGNNs require only the output classifications to stabilise; and halting RGNNs equip each vertex with a dedicated halting classifier that decides when that vertex stops. The paper shows that, over undirected graphs, converging RGNNs and graded-bisimulation-invariant halting RGNNs are equally expressive, and output-converging RGNNs are at least as expressive as converging RGNNs. Moreover, relative to classifiers expressible in monadic second-order logic (MSO), converging RGNNs capture exactly the graded modal μ-calculus (μGML), while output-converging RGNNs capture at least μGML. (μGML extends graded modal logic, whose modalities count neighbours, e.g. ◇≥2 φ for "at least two neighbours satisfy φ", with fixpoint operators.)
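To make the difference between the first two stopping regimes concrete, here is a minimal NumPy sketch; it is not the paper's construction, and the tanh update rule, the tolerance, the one-step output-stability check, and the toy graph are all illustrative assumptions.

import numpy as np

def rgnn_step(H, A, W_self, W_neigh):
    # One shared recurrent layer: each vertex combines its own state
    # with the sum of its neighbours' states, squashed through tanh.
    return np.tanh(H @ W_self + A @ H @ W_neigh)

def run_rgnn(H, A, W_self, W_neigh, W_out, mode="converging",
             tol=1e-6, max_iter=1000):
    prev_labels = np.argmax(H @ W_out, axis=1)
    for _ in range(max_iter):
        H_next = rgnn_step(H, A, W_self, W_neigh)
        labels = np.argmax(H_next @ W_out, axis=1)
        if mode == "converging":
            # Converging semantics: stop once every vertex
            # representation has stabilised (up to tol).
            if np.max(np.abs(H_next - H)) < tol:
                H = H_next
                break
        else:
            # Output-converging semantics: stop once the per-vertex
            # classifications stop changing, even while the hidden
            # states may keep moving. (One unchanged step is a
            # heuristic stand-in for "stable from here on".)
            if np.array_equal(labels, prev_labels):
                H = H_next
                break
        H, prev_labels = H_next, labels
    return np.argmax(H @ W_out, axis=1)

# Toy run on a 3-vertex path graph with small random weights.
rng = np.random.default_rng(0)
A = np.array([[0., 1., 0.], [1., 0., 1.], [0., 1., 0.]])
H0 = rng.normal(size=(3, 4))
W1, W2 = 0.1 * rng.normal(size=(4, 4)), 0.1 * rng.normal(size=(4, 4))
print(run_rgnn(H0, A, W1, W2, rng.normal(size=(4, 2))))

With small enough weights the update map is a contraction, so the converging criterion is guaranteed to fire; in general neither criterion need ever hold, which is part of what distinguishes the three models.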

Key facts

  • arXiv paper 2604.25551 studies three RGNN models: converging, output-converging, and halting.
  • Converging RGNNs require all vertex representations to stabilise.
  • Output-converging RGNNs require only output classifications to stabilise.
  • Halting RGNNs use a per-vertex halting classifier to determine when each vertex stops (see the sketch after this list).
  • Over undirected graphs, converging RGNNs are exactly as expressive as graded-bisimulation-invariant halting RGNNs.
  • Output-converging RGNNs are at least as expressive as converging RGNNs.
  • Relative to MSO-expressible classifiers, converging RGNNs express exactly μGML.
  • Output-converging RGNNs express at least μGML.
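For the halting model, here is a sketch under the same illustrative assumptions as above; the thresholded sigmoid halting score is hypothetical, standing in for whatever per-vertex halting classifier the network actually learns.

import numpy as np

def run_halting_rgnn(H, A, W_self, W_neigh, w_halt, W_out,
                     threshold=0.5, max_iter=1000):
    halted = np.zeros(H.shape[0], dtype=bool)
    for _ in range(max_iter):
        H_next = np.tanh(H @ W_self + A @ H @ W_neigh)
        # Vertices that have already halted keep their frozen state;
        # the others take the new one.
        H = np.where(halted[:, None], H, H_next)
        # Hypothetical per-vertex halting classifier: a sigmoid score
        # over the current state. A vertex halts once its score
        # crosses the threshold and never updates again.
        scores = 1.0 / (1.0 + np.exp(-(H @ w_halt)))
        halted |= scores > threshold
        if halted.all():
            break
    # Each vertex is classified from its final (frozen) state.
    return np.argmax(H @ W_out, axis=1)

Because each vertex freezes individually, vertices can stop at different times; the run as a whole terminates only if every vertex eventually halts.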

Entities

Institutions

  • arXiv

Sources

  • arXiv:2604.25551