Continual Graph Learning: A Survey
arXiv:2301.12230v2 Announce Type: replace
Abstract: Continual Graph Learning (CGL) enables models to incrementally learn from streaming graph-structured data without forgetting previously acquired knowledge. Experience replay is a common solution that reuses a subset of past samples during training; however, it can lead to information loss and privacy risks. Generative replay addresses these concerns by synthesizing informative subgraphs for rehearsal. Existing generative replay approaches often rely on graph condensation via distribution matching, which faces two key challenges: (1) random feature encodings may fail to capture the characteristic kernel of the discrepancy metric, weakening distribution alignment; and (2) matching over a fixed small subgraph cannot guarantee low risk on previous tasks, as indicated by domain adaptation theory. To overcome these limitations, we propose an Adversarial Condensation based Generative Replay (ACGR) framework. It reformulates graph condensation as a min-max optimization problem to achieve better distribution matching. Moreover, instead of learning a single subgraph, we learn its distribution, allowing multiple samples to be generated and improving empirical risk minimization. Experiments on three benchmark datasets demonstrate that ACGR outperforms existing methods in both accuracy and stability.
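The min-max reformulation of distribution matching can be illustrated with a minimal toy sketch: an adversarial encoder is updated to *maximize* the discrepancy between real and condensed features, while the condensed features are updated to *minimize* it. This is an assumption-laden simplification — ACGR would use a GNN encoder and subgraph structure, whereas here the "encoder" is a single linear projection, the data are plain feature vectors, and the discrepancy is a crude mean-embedding gap rather than a full kernel MMD.

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy node features from a previous task (graph structure omitted for brevity).
X = rng.normal(loc=1.0, scale=0.5, size=(200, 4))

# Learnable condensed features (the replay memory), initialized randomly.
S = rng.normal(size=(10, 4))

# Adversarial linear encoder; in ACGR this would be a learned GNN (assumption).
w = rng.normal(size=4)


def gap(X, S, w):
    """Mean-embedding gap under encoder w (a crude surrogate for an MMD)."""
    return (X @ w).mean() - (S @ w).mean()


initial_gap = gap(X, S, w) ** 2

lr_w, lr_s = 0.5, 0.1
for _ in range(300):
    # Inner max: move w toward the direction exposing the largest gap,
    # with a norm constraint so the objective stays bounded.
    g = gap(X, S, w)
    w += lr_w * 2 * g * (X.mean(axis=0) - S.mean(axis=0))  # ascend on gap**2
    w /= max(np.linalg.norm(w), 1e-8)

    # Outer min: update the condensed features to close the gap under w.
    g = gap(X, S, w)
    grad_S = -np.ones((S.shape[0], 1)) @ w[None, :] / S.shape[0]  # d(gap)/dS
    S -= lr_s * 2 * g * grad_S  # descend on gap**2

final_gap = gap(X, S, w) ** 2
```

After the alternating updates, `final_gap` should be far smaller than `initial_gap`: the condensed set matches the real distribution even under the worst-case encoder direction, which is exactly the guarantee a fixed random encoding cannot provide.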