A Scalable Multi-Task Model for Virtual Sensors

arXiv:2601.20634v2

Abstract: Virtual sensors replace expensive physical sensors in critical applications by using machine learning to predict target signals from available measurements. Existing virtual sensor approaches require application-specific models with hand-selected inputs for each sensor, cannot leverage task synergies, and lack consistent benchmarks. While emerging time series foundation models offer general-purpose, pretrained solutions in other domains, they are computationally expensive and limited to predicting their input signals, making them incompatible with virtual sensing. We introduce the first multi-task model for virtual sensors that addresses both limitations. Our unified model can simultaneously predict diverse virtual sensors, exploiting task synergies while maintaining computational efficiency. It learns the relevant input signals for each virtual sensor, eliminating the need for expert knowledge while adding explainability. In our large-scale evaluation on three standard benchmarks and an application-specific dataset with over 18 billion samples, our architecture reduces computation time by up to 415x and memory requirements by 951x, while maintaining or even improving predictive quality compared to unified baselines. Compared to existing isolated models for a single virtual sensor, our unified approach generates superior predictions at similar inference speed while scaling gracefully to hundreds of virtual sensors with a nearly constant parameter count, enabling practical deployment in large-scale sensor networks.
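The abstract describes a unified model that learns which input signals matter for each virtual sensor while sharing most parameters across tasks. As the paper's actual architecture is not detailed here, the following is only a minimal sketch of that general idea: hypothetical per-task softmax gates over the available input signals (giving an inspectable relevance weighting) feeding a shared backbone with one lightweight head per virtual sensor. All dimensions and names are illustrative assumptions, not the authors' design.

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical sizes (not from the paper): 8 available measurement
# signals, 3 virtual sensors (tasks), a 16-unit shared backbone.
n_inputs, n_tasks, hidden = 8, 3, 16

# Learned per-task input-relevance logits. A softmax over them acts as
# a soft input-selection gate, so relevance is readable per sensor.
gate_logits = rng.normal(size=(n_tasks, n_inputs))
W_shared = 0.1 * rng.normal(size=(n_inputs, hidden))  # shared backbone weights
w_heads = 0.1 * rng.normal(size=(n_tasks, hidden))    # one small head per task

def softmax(z, axis=-1):
    # Numerically stable softmax.
    z = z - z.max(axis=axis, keepdims=True)
    e = np.exp(z)
    return e / e.sum(axis=axis, keepdims=True)

def predict(x):
    """Predict all virtual sensors at once from one measurement vector x."""
    gates = softmax(gate_logits)        # (n_tasks, n_inputs), rows sum to 1
    gated = gates * x                   # each task weights the shared inputs
    h = np.tanh(gated @ W_shared)       # shared trunk, reused by every task
    return np.einsum("th,th->t", h, w_heads)  # one scalar output per sensor

x = rng.normal(size=n_inputs)           # one vector of available measurements
y = predict(x)                          # predictions for all virtual sensors
```

Because only the gate logits and the tiny heads grow with the number of virtual sensors while the backbone is shared, a structure like this keeps the parameter count nearly constant as tasks are added, which is the scaling property the abstract claims.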
