On Halting vs Converging in Recurrent Graph Neural Networks

📰 ArXiv cs.AI

arXiv:2604.25551v1 Announce Type: cross

Abstract: Recurrent Graph Neural Networks (RGNNs) extend standard GNNs by iterating message-passing until some stopping condition is met. Various RGNN models have been proposed in the literature. In this paper, we study three such models: converging RGNNs, where all vertex representations must stabilise; output-converging RGNNs, where only the output classifications must stabilise; and halting RGNNs, where a per-vertex halting classifier determines when to halt.
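To make the stopping regimes concrete, here is a minimal sketch (not the paper's construction) contrasting two of them on a toy scalar message-passing update: a converging RGNN that iterates until every vertex state stabilises, and a halting RGNN where a per-vertex halting classifier (here an arbitrary assumed predicate `halt`) freezes each vertex once it fires. All function names and the averaging update are illustrative assumptions.

```python
def step(adj, h):
    """One message-passing round (illustrative): each vertex averages
    its own scalar state with its neighbours'. This is a stochastic
    averaging update, so on a connected graph it converges."""
    new_h = []
    for v, nbrs in enumerate(adj):
        total = h[v] + sum(h[u] for u in nbrs)
        new_h.append(total / (1 + len(nbrs)))
    return new_h

def run_converging(adj, h, tol=1e-9, max_iters=10_000):
    """Converging RGNN: stop once all vertex representations stabilise."""
    for t in range(max_iters):
        new_h = step(adj, h)
        if max(abs(a - b) for a, b in zip(h, new_h)) < tol:
            return new_h, t + 1
        h = new_h
    return h, max_iters

def run_halting(adj, h, halt, max_iters=10_000):
    """Halting RGNN: each vertex freezes once the per-vertex halting
    classifier `halt(v, state)` fires; stop when all have halted."""
    halted = [False] * len(adj)
    for t in range(max_iters):
        new_h = step(adj, h)
        h = [h[v] if halted[v] else new_h[v] for v in range(len(adj))]
        halted = [halted[v] or halt(v, h[v]) for v in range(len(adj))]
        if all(halted):
            return h, t + 1
    return h, max_iters

# Tiny path graph 0-1-2 as an adjacency list; averaging drives all
# vertex states toward a common value.
adj = [[1], [0, 2], [1]]
h0 = [0.0, 0.0, 3.0]
h_conv, n_conv = run_converging(adj, h0)
```

Note the structural difference: the converging model's stopping condition is a global property of the trajectory, whereas the halting model's is computed locally per vertex, which is what makes the two regimes behave differently in general.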

Published 29 Apr 2026