Model-Free Neural Filtering: A Comparison with Classical Filters in Nonlinear Systems
arXiv:2601.21266v3 Announce Type: replace
Abstract: Neural network models are increasingly used for state estimation in control and decision-making, yet it remains unclear to what extent they behave as principled filters in nonlinear dynamical systems. Unlike classical filters, which rely on explicit dynamics and noise models, neural estimators can be trained purely from data. We present a systematic comparison between model-free neural estimators and classical filtering methods across multiple nonlinear scenarios. On the neural side, we evaluate Transformer-based models, recurrent neural networks, and state-space models; on the classical side, we compare against particle filters and nonlinear Kalman filters. Results show that structured state-space models (SSMs), in particular Mamba and Mamba-2, are consistently the strongest neural estimators. They approach strong classical filters in several nonlinear systems and outperform weaker classical baselines without access to system models, while the evaluated neural implementations achieve substantially higher inference throughput on the tested hardware. Accurate model-based filters can still dominate when their modeling assumptions match the true system. We attribute the relative strength of SSMs to a filtering-aligned inductive bias: recursive latent-state updates make them structurally closer to classical filters under fixed parameter budgets, finite data, and long-horizon evaluation.
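The "filtering-aligned inductive bias" claim can be made concrete with a minimal sketch (illustrative only, not the paper's implementation): a linear Kalman filter step and a diagonal linear SSM recurrence of the kind at the core of Mamba-style layers share the same recursive pattern, in which a latent state is propagated forward and then combined with the current input. All function and parameter names below are hypothetical.

```python
import numpy as np

def kalman_step(x, P, y, A, C, Q, R):
    """One predict/update step of a linear Kalman filter.
    x, P: prior state mean and covariance; y: new measurement;
    A, C: dynamics and observation matrices; Q, R: noise covariances."""
    # Predict: propagate the state estimate through the known dynamics.
    x_pred = A @ x
    P_pred = A @ P @ A.T + Q
    # Update: blend the prediction with the measurement via the Kalman gain.
    S = C @ P_pred @ C.T + R
    K = P_pred @ C.T @ np.linalg.inv(S)
    x_new = x_pred + K @ (y - C @ x_pred)
    P_new = (np.eye(len(x)) - K @ C) @ P_pred
    return x_new, P_new

def ssm_step(h, u, a, b, c):
    """One step of a diagonal linear SSM recurrence:
        h_t = a * h_{t-1} + b * u_t,   output_t = c . h_t
    where a, b, c are learned (and, in Mamba, input-dependent) parameters.
    Structurally this is the same recursion: propagate the latent state,
    then fold in the current input."""
    h_new = a * h + b * u
    return h_new, np.dot(c, h_new)
```

The structural parallel is the point: both maintain a latent state updated from the previous state and the current input, which is a plausible reason a trained SSM can approximate filter behavior without an explicit system model, whereas the Kalman step additionally requires A, C, Q, and R to be known.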