Pretrained Event Classification Model for High Energy Physics Analysis

arXiv:2412.10665v2 Announce Type: replace-cross

Abstract: We introduce a foundation model for event classification in high-energy physics, built on a Graph Neural Network architecture and trained on 120 million simulated proton-proton collision events spanning 12 distinct physics processes. The model is pretrained to learn a general and robust representation of collision data using challenging multiclass and multilabel classification tasks. Its performance is evaluated across seven event classification tasks, which include new physics processes not encountered during pretraining as well as ATLAS Open Data, demonstrating generalizability across different simulation frameworks, from Delphes fast simulation to full ATLAS detector simulation. Fine-tuning the pretrained model significantly improves classification performance, particularly in scenarios with limited training data, yielding gains in both accuracy and computational efficiency.

To investigate the underlying mechanisms behind these performance improvements, we employ a representational similarity evaluation framework based on Centered Kernel Alignment (CKA). This analysis reveals that encoder-stage representations of the fine-tuned model remain similar to those of the baseline, while intermediate graph processing layers diverge substantially, indicating that fine-tuning preserves general-purpose encoders while developing fundamentally different message-passing pathways to arrive at superior task performance.
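The abstract's representational-similarity analysis rests on Centered Kernel Alignment. As a rough illustration only (the paper's exact setup is not specified here), the widely used linear variant of CKA between two layers' activation matrices can be sketched as follows; the function name and shapes are illustrative, not taken from the paper:

```python
import numpy as np

def linear_cka(X, Y):
    """Linear Centered Kernel Alignment between two representation
    matrices X (n x d1) and Y (n x d2), where the n rows correspond
    to the same n input examples (e.g. collision events).

    Returns a similarity score in [0, 1]; 1 means the two layers'
    representations are identical up to orthogonal transforms and
    isotropic scaling.
    """
    # Center each feature dimension across examples
    X = X - X.mean(axis=0, keepdims=True)
    Y = Y - Y.mean(axis=0, keepdims=True)
    # Linear CKA: ||Y^T X||_F^2 / (||X^T X||_F * ||Y^T Y||_F)
    cross = np.linalg.norm(Y.T @ X, ord="fro") ** 2
    norm_x = np.linalg.norm(X.T @ X, ord="fro")
    norm_y = np.linalg.norm(Y.T @ Y, ord="fro")
    return cross / (norm_x * norm_y)
```

Comparing `linear_cka(baseline_layer_k, finetuned_layer_k)` layer by layer would reproduce the kind of analysis described: scores near 1 for the encoder stages and markedly lower scores for the intermediate message-passing layers.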
