Bridging the Divide: End-to-End Sequence-Graph Learning
arXiv:2510.25126v2 Announce Type: replace
Abstract: Many real-world prediction tasks, particularly those involving entities such as customers or patients, involve both sequential and relational data: each entity maintains its own sequence of events while simultaneously engaging in relationships with others. Existing methods in sequence and graph modeling often overlook one modality in favor of the other. We argue that these two facets should instead be integrated and learned jointly. We introduce BRIDGE, a unified end-to-end architecture that couples a sequence model with a graph module under a single objective, allowing gradients to flow across both components to learn task-aligned representations. To enable fine-grained interaction, we propose TOKENXATTN, a token-level cross-attention layer that passes messages between specific events in neighboring sequences. Across two settings, relationship prediction and fraud detection, BRIDGE consistently outperforms static graph models, temporal graph methods, and sequence-only baselines on both ranking and classification metrics.
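The abstract describes token-level cross-attention between events in neighboring sequences. As a rough illustration of that idea (not the paper's actual TOKENXATTN implementation, whose details are not given here), the sketch below computes scaled dot-product cross-attention in NumPy: each event embedding in a focal entity's sequence queries the event embeddings of a neighboring entity's sequence, and the attention-weighted values serve as the message passed along the graph edge. The function name, shapes, and projection matrices are illustrative assumptions.

```python
import numpy as np

def softmax(x, axis=-1):
    # Numerically stable softmax over the given axis.
    x = x - x.max(axis=axis, keepdims=True)
    e = np.exp(x)
    return e / e.sum(axis=axis, keepdims=True)

def token_cross_attention(seq_a, seq_b, W_q, W_k, W_v):
    """Illustrative token-level cross-attention between two event sequences.

    seq_a: (T_a, d) event embeddings of the focal entity (queries)
    seq_b: (T_b, d) event embeddings of a neighboring entity (keys/values)
    Returns: (T_a, d_v) per-event messages aggregated from the neighbor.
    """
    Q = seq_a @ W_q                              # (T_a, d_k)
    K = seq_b @ W_k                              # (T_b, d_k)
    V = seq_b @ W_v                              # (T_b, d_v)
    scores = Q @ K.T / np.sqrt(Q.shape[-1])      # (T_a, T_b) event-to-event scores
    attn = softmax(scores, axis=-1)              # each focal event attends over neighbor events
    return attn @ V                              # weighted sum of neighbor event values

# Toy usage with random embeddings and hypothetical dimensions.
rng = np.random.default_rng(0)
d = 8
seq_a = rng.normal(size=(5, d))                  # focal entity: 5 events
seq_b = rng.normal(size=(7, d))                  # neighbor entity: 7 events
W_q, W_k, W_v = (rng.normal(size=(d, d)) for _ in range(3))
msg = token_cross_attention(seq_a, seq_b, W_q, W_k, W_v)
print(msg.shape)  # (5, 8): one message per focal event
```

In a full model, such messages would typically be aggregated over all graph neighbors and fed back into the sequence encoder, so that the sequence and graph components train under a single loss as the abstract describes.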