PVD-ONet: A Multi-scale Neural Operator Method for Singularly Perturbed Boundary Layer Problems

arXiv:2507.21437v2 Announce Type: replace

Abstract: Physics-informed neural networks and physics-informed DeepONet excel at solving partial differential equations; however, they often fail to converge for singularly perturbed problems. To address this, we propose two novel frameworks, the Prandtl-Van Dyke neural network (PVD-Net) and its operator learning extension, the Prandtl-Van Dyke Deep Operator Network (PVD-ONet), both of which rely solely on the governing equations without data. To meet varying task-specific requirements, PVD-Net and PVD-ONet are each developed in two versions, tailored respectively to stability-focused and high-accuracy modeling. The leading-order PVD-Net adopts a two-network architecture combined with Prandtl's matching condition, targeting stability-prioritized scenarios. The high-order PVD-Net employs a five-network design with Van Dyke's matching principle to capture fine-scale boundary layer structures, making it well suited to high-accuracy scenarios. PVD-ONet generalizes PVD-Net to the operator learning setting by assembling multiple DeepONet modules, directly mapping initial conditions to solution operators and enabling instant predictions for an entire family of boundary layer problems without retraining. Numerical experiments (second-order equations with constant and variable coefficients, and internal layer problems) show that the proposed methods consistently outperform existing baselines. Moreover, beyond forward prediction, the framework extends to inverse problems: it can infer the scaling exponent governing boundary layer thickness from sparse data, indicating potential for practical applications.
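To make the Prandtl matching idea underlying the leading-order PVD-Net concrete, here is a minimal sketch on a classical textbook model problem (my own illustrative example, not taken from the paper): the singularly perturbed ODE εy'' + y' = 1 with y(0) = y(1) = 0. The outer expansion, the boundary-layer (inner) expansion in the stretched variable X = x/ε, and Prandtl's matching condition combine into a composite approximation that the code compares against the exact solution.

```python
import numpy as np

# Illustrative model problem (not from the paper):
#   eps * y'' + y' = 1,  y(0) = y(1) = 0,  0 < eps << 1
eps = 1e-2
x = np.linspace(0.0, 1.0, 2001)

# Outer solution: drop eps*y'' and enforce y(1) = 0  ->  y_out = x - 1.
# Inner solution near x = 0 in the stretched variable X = x/eps with Y(0) = 0:
#   Y(X) = A * (1 - exp(-X)).
# Prandtl matching, lim_{X->inf} Y(X) = lim_{x->0} y_out(x), gives A = -1.
# Composite approximation = outer + inner - common matched limit:
y_composite = (x - 1.0) + np.exp(-x / eps)

# Exact solution for comparison: y = x + C * (1 - exp(-x/eps)),
# with C = -1 / (1 - exp(-1/eps)) fixed by the boundary conditions.
C = -1.0 / (1.0 - np.exp(-1.0 / eps))
y_exact = x + C * (1.0 - np.exp(-x / eps))

err = np.max(np.abs(y_composite - y_exact))
print(err)  # exponentially small in 1/eps
```

The leading-order PVD-Net replaces the two closed-form expansions with two neural networks (one outer, one inner) trained on the residuals of the corresponding reduced equations, with Prandtl's matching condition imposed as an additional loss term.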
