SEIS: Subspace-based Equivariance and Invariance Scores for Neural Representations

arXiv:2602.04054v2 Abstract: Understanding how neural representations respond to geometric transformations is essential for evaluating whether learned features preserve meaningful spatial structure. Existing approaches assess robustness primarily by comparing model outputs under transformed inputs, offering limited insight into how geometric information is organized within internal representations and failing to distinguish between information loss and re-encoding. In this work, we introduce SEIS (Subspace-based Equivariance and Invariance Scores), a subspace metric for analyzing layer-wise feature representations under geometric transformations that disentangles equivariance from invariance without requiring labels or explicit knowledge of the transformation. Through controlled experiments across diverse architectures, we uncover several consistent patterns. First, convolutional encoders exhibit a depth-wise transition from strong equivariance to increasing invariance, with both properties stabilizing within the first few training epochs. In segmentation decoders, however, equivariance tends to recover in later layers. Second, this trade-off is not intrinsic but is shaped by training decisions: data augmentation strengthens both equivariance and invariance simultaneously, and multi-task learning induces synergistic gains in both properties beyond what either task achieves alone. Extending our analysis beyond convolutional networks, we find that transformer-based models exhibit distinct geometric behaviors, while MLP-Mixers display intermediate characteristics.
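To make the idea of a subspace-based comparison concrete, here is a minimal sketch of one common way to compare feature subspaces: take the top-k principal subspaces of layer activations for original and transformed inputs, and measure their alignment via principal angles. This is a hypothetical illustration under that assumption, not the paper's actual SEIS definition; the function name `subspace_overlap` and the choice of k are invented for the example.

```python
import numpy as np

def subspace_overlap(feats_a, feats_b, k=8):
    """Alignment between top-k principal subspaces of two feature matrices.

    feats_*: (n_samples, dim) layer activations for original vs. transformed
    inputs. Returns the mean cosine of the principal angles, in [0, 1].
    Values near 1 mean the transformation leaves the dominant subspace
    intact (an invariance-like signal); lower values suggest re-encoding.
    NOTE: illustrative proxy only, not the SEIS metric from the paper.
    """
    def top_k_basis(X):
        Xc = X - X.mean(axis=0)                  # center the features
        _, _, Vt = np.linalg.svd(Xc, full_matrices=False)
        return Vt[:k].T                          # (dim, k) orthonormal basis

    Ua, Ub = top_k_basis(feats_a), top_k_basis(feats_b)
    # Singular values of Ua^T Ub are the cosines of the principal angles.
    cosines = np.linalg.svd(Ua.T @ Ub, compute_uv=False)
    return float(np.clip(cosines, 0.0, 1.0).mean())

# Sanity check: identical features give perfect overlap.
rng = np.random.default_rng(0)
X = rng.normal(size=(200, 32))
print(subspace_overlap(X, X))  # → 1.0 (up to floating-point error)
```

Applying such a score per layer, across a sweep of transformations, is one plausible way to trace the depth-wise equivariance-to-invariance transition the abstract describes.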
