Benchmarking Wireless Representations: High-Dimensional vs. Compressed Embeddings for Efficiency and Robustness
arXiv:2605.02009v1 Announce Type: cross
Abstract: Building on recent advances in representation learning for wireless channels, this work investigates the cost-benefit trade-offs of high-dimensional channel embeddings in practical systems. We benchmark three classes of wireless representations: high-dimensional learned embeddings from a wireless foundation model, compact autoencoder-based representations with significantly lower dimensionality, and raw-data baselines, evaluating each across diverse downstream tasks. We then systematically analyze data efficiency, noise robustness, and computational complexity, explicitly characterizing the resource overhead of high-dimensional embeddings. Beyond standard tasks such as line-of-sight/non-line-of-sight (LoS/NLoS) classification and beam selection, we introduce power allocation as a new downstream task. Our results reveal clear trade-offs: while high-dimensional embeddings can perform well in few-shot regimes for certain tasks, they incur substantial latency and parameter overhead. In contrast, compressed latent representations learned by autoencoders demonstrate improved noise robustness and more stable performance across tasks, while significantly reducing computational and transmission costs.
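
The core trade-off the abstract describes, a high-dimensional raw or learned representation versus a compact autoencoder latent, can be sketched numerically. The toy below is not the paper's method: it uses a linear autoencoder (whose optimum is a truncated SVD of the centered data) on synthetic channel data, with an assumed CSI grid size and latent dimensionality, just to make the compression-ratio and reconstruction-cost arithmetic concrete.

```python
import numpy as np

# Illustrative sketch (assumptions, not the paper's setup): a linear
# autoencoder compressing flattened channel samples. The optimal linear
# autoencoder is equivalent to truncated SVD/PCA on the centered data.
rng = np.random.default_rng(0)

n_samples, n_ant, n_sub = 1000, 16, 32           # assumed CSI grid: antennas x subcarriers
raw = rng.standard_normal((n_samples, n_ant * n_sub))  # flattened real-valued channels (512-dim)

latent_dim = 32                                  # assumed compressed dimensionality
mean = raw.mean(axis=0)
U, S, Vt = np.linalg.svd(raw - mean, full_matrices=False)
encoder = Vt[:latent_dim].T                      # (512, 32) orthonormal projection
latent = (raw - mean) @ encoder                  # compact representation to store/transmit
recon = latent @ encoder.T + mean                # decoder is the encoder's transpose

compression_ratio = raw.shape[1] / latent_dim    # 512 / 32 -> 16x fewer values to transmit
rel_err = np.linalg.norm(raw - recon) / np.linalg.norm(raw)
print(f"compression: {compression_ratio:.0f}x, relative reconstruction error: {rel_err:.3f}")
```

On real channel data, which is far more structured than this i.i.d. Gaussian placeholder, the same latent dimensionality would retain much more of the signal energy; the sketch only shows how the transmission-cost side of the trade-off scales with the chosen latent size.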