Scale-aware Message Passing For Graph Node Classification

arXiv:2411.19392v3

Abstract: Most Graph Neural Networks (GNNs) operate at the first-order scale, even though multi-scale representations are known to be crucial in domains such as image classification. In this work, we investigate whether GNNs can similarly benefit from multi-scale learning, rather than being limited to a fixed depth of $k$-hop aggregation. We begin by formalizing scale invariance in graph learning, providing theoretical guarantees and empirical evidence for its effectiveness. Building on this principle, we introduce ScaleNet, a scale-aware message-passing architecture that combines directed multi-scale feature aggregation with an adaptive self-loop mechanism. ScaleNet achieves state-of-the-art performance on six benchmark datasets, covering both homophilic and heterophilic graphs. To address scalability, we further propose LargeScaleNet, which extends multi-scale learning to large graphs and sets new state-of-the-art results on three large-scale benchmarks. We also show that FaberNet's strength largely arises from multi-scale feature integration. Taken together, our results suggest that scale invariance may serve as a valuable principle for improving single-order GNNs. The code for all experiments is available at https://github.com/Qin87/ScaleNet/tree/iclr_scale_aware/.
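To make the core idea concrete, here is a minimal sketch, in PyTorch, of directed multi-scale aggregation with a learned self-loop. This is an illustrative assumption, not ScaleNet's actual implementation: the class name MultiScaleLayer, the per-scale linear maps, the sigmoid gate standing in for the adaptive self-loop, and the dense-adjacency formulation are all hypothetical choices made for readability.

```python
import torch
import torch.nn as nn

class MultiScaleLayer(nn.Module):
    """Illustrative multi-scale message-passing layer (hypothetical; not the paper's code).

    Aggregates node features over k-hop neighbourhoods for k = 1..K along both
    directions of a directed graph, plus a self-loop term whose strength is
    learned (a simple sigmoid gate stands in for the adaptive self-loop).
    """

    def __init__(self, in_dim: int, out_dim: int, num_scales: int = 2):
        super().__init__()
        self.num_scales = num_scales
        # One linear map per (scale, direction) of the directed graph.
        self.lin_fwd = nn.ModuleList([nn.Linear(in_dim, out_dim) for _ in range(num_scales)])
        self.lin_bwd = nn.ModuleList([nn.Linear(in_dim, out_dim) for _ in range(num_scales)])
        self.lin_self = nn.Linear(in_dim, out_dim)
        # Learnable scalar gate in (0, 1) controlling the self-loop contribution.
        self.self_gate = nn.Parameter(torch.zeros(1))

    def forward(self, x: torch.Tensor, adj: torch.Tensor) -> torch.Tensor:
        """x: [N, in_dim] node features; adj: [N, N] dense directed adjacency."""
        out = torch.sigmoid(self.self_gate) * self.lin_self(x)
        h_fwd, h_bwd = x, x
        for k in range(self.num_scales):
            h_fwd = adj @ h_fwd      # propagate one more hop along out-edges
            h_bwd = adj.T @ h_bwd    # propagate one more hop along in-edges
            out = out + self.lin_fwd[k](h_fwd) + self.lin_bwd[k](h_bwd)
        return torch.relu(out)


# Usage on a random directed graph with 5 nodes and 8-dim features.
x = torch.randn(5, 8)
adj = (torch.rand(5, 5) < 0.3).float()
layer = MultiScaleLayer(in_dim=8, out_dim=16, num_scales=2)
print(layer(x, adj).shape)  # torch.Size([5, 16])
```

For large graphs, one would presumably replace the dense matrix with a sparse adjacency (e.g. torch.sparse_coo_tensor) so that each hop costs O(|E|) rather than O(N^2); scalability changes of this kind are what a variant like LargeScaleNet would rest on.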
