PACIFIER: Pacing Opinion Depolarization via a Unified Graph Learning Framework
arXiv:2602.23390v3 Announce Type: replace-cross
Abstract:
Opinion polarization moderation under the Friedkin-Johnsen (FJ) model is typically treated as an analytical optimization problem. Existing algorithms rely on linear steady-state analysis and repeated equilibrium recomputation, leading to poor scalability and limited adaptability to rich intervention regimes. This paper explores whether polarization moderation can be reformulated as a graph-based sequential planning problem.
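The linear steady-state analysis referred to above follows from the standard FJ update, in which each node blends its innate opinion with its neighbors' expressed opinions according to a stubbornness weight. As a minimal sketch (function names and the toy two-node instance are illustrative, not from the paper), the equilibrium can be computed in closed form rather than by iteration:

```python
import numpy as np

# Friedkin-Johnsen (FJ) dynamics: node i holds an innate opinion s_i and
# repeatedly updates its expressed opinion as a stubbornness-weighted mix
# of s_i and its neighbors' expressed opinions. The fixed point is
# x* = (I - (I - A) W)^{-1} A s, with A = diag(alpha) the stubbornness.
def fj_equilibrium(W, s, alpha):
    n = len(s)
    A = np.diag(alpha)
    return np.linalg.solve(np.eye(n) - (np.eye(n) - A) @ W, A @ s)

# Toy two-node echo pair: a row-stochastic influence matrix W,
# opposed innate opinions, and stubbornness 0.5 for both nodes.
W = np.array([[0.0, 1.0],
              [1.0, 0.0]])
s = np.array([1.0, -1.0])
alpha = np.array([0.5, 0.5])
x_star = fj_equilibrium(W, s, alpha)  # expressed opinions at equilibrium
```

Every intervention (edge change, opinion change, node removal) perturbs `W` or `s`, so analytical methods must re-solve this linear system per candidate, which is the scalability bottleneck the abstract points to.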
We propose PACIFIER, the first unified graph-learning and graph reinforcement learning framework for FJ-based intervention. It reformulates the canonical ModerateInternal (MI) and ModerateExpressed (ME) problems as ordered graph-intervention tasks evaluated by Accumulated Normalized Polarization (ANP). The framework includes PACIFIER-RL for long-horizon value learning and PACIFIER-Greedy for efficient myopic ranking, and supports cost-aware moderation, continuous opinions, and topology-altering node removal.
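The abstract does not spell out the ANP metric. One plausible reading, stated here purely as an assumption, is that each intervention step's polarization is normalized by the pre-intervention value and then accumulated over the intervention sequence, so that lower ANP means faster depolarization:

```python
import numpy as np

def polarization(x):
    """Polarization as squared deviation of expressed opinions from their
    mean -- a standard index in the FJ literature."""
    x = np.asarray(x, dtype=float)
    return float(np.sum((x - x.mean()) ** 2))

def accumulated_normalized_polarization(trajectory):
    """Hypothetical ANP sketch (the exact definition is not given in the
    abstract): average, over intervention steps, of the post-step
    polarization normalized by the initial polarization."""
    p0 = polarization(trajectory[0])
    steps = trajectory[1:]
    return sum(polarization(x) / p0 for x in steps) / len(steps)

# Example: opinions [1, -1] are moderated to [0.5, -0.5], then to [0, 0].
anp = accumulated_normalized_polarization([[1.0, -1.0],
                                           [0.5, -0.5],
                                           [0.0, 0.0]])
```

Under this reading, ANP rewards interventions that reduce polarization early in the sequence, which is what makes it suitable as a per-step signal for sequential planning.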
The core challenge is small-to-large transfer. PACIFIER is trained on synthetic graphs with fewer than 50 nodes but must generalize to large real-world networks. To achieve this, we integrate four scale-compatible designs: a two-echo-chamber training distribution, anchor-and-mark history encoding, normalized global features, and residual-polarization rewards. These components make topology-preserving FJ moderation observable and learnable across graph scales.
Experiments on 15 real-world Twitter networks (up to 155,599 nodes) show that PACIFIER matches analytical solvers in MI and consistently outperforms baselines in ME, continuous-ME, cost-ME, and node removal. PACIFIER-RL proves especially effective when long-horizon costs or structural consequences dominate immediate gains.