Neural Bridge Processes
arXiv:2508.07220v2 Announce Type: replace-cross
Abstract: Learning stochastic functions from partially observed context-target pairs requires models that are expressive, uncertainty-aware, and strongly conditioned on inputs. Neural Diffusion Processes (NDPs) improve expressivity with denoising diffusion, but their forward process is input-independent: inputs enter only through the reverse denoiser, so the noisy training states themselves do not encode the conditioning inputs. We propose Neural Bridge Processes (NBPs), which replace the unconditional forward kernel with an input-anchored bridge trajectory. When input and output dimensions differ, NBPs learn an output-space anchor $a_\psi(x)=P_\psi(x)$, allowing coordinates or other inputs to guide the generative path without changing the denoising backbone. We show theoretically that process-level anchoring induces pathwise input distinguishability, injects information about $x$ into the noisy states, and creates a direct gradient pathway unavailable to NDPs. Experiments on synthetic regression, EEG, CylinderFlow, and image regression show consistent improvements. Additional ablations show that the gains come from the full bridge construction with learned alignment, and that the same input-anchored path principle transfers to Flow Matching Neural Processes. These results suggest that bridge-anchored generative paths provide a general mechanism for strengthening conditional stochastic function modeling.
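
To make the anchoring mechanism concrete, below is a minimal PyTorch sketch of an input-anchored bridge forward kernel. It assumes a Brownian-bridge-style path pinned at $t=1$ to a learned anchor $a_\psi(x)=P_\psi(x)$; the `AnchorNet` architecture, the linear interpolation schedule, and the noise scale `sigma` are illustrative assumptions, not the paper's exact construction.

```python
import torch
import torch.nn as nn


class AnchorNet(nn.Module):
    """Hypothetical anchor network P_psi mapping inputs x to an
    output-space anchor a_psi(x). The MLP architecture is an assumption."""

    def __init__(self, x_dim: int, y_dim: int, hidden: int = 64):
        super().__init__()
        self.net = nn.Sequential(
            nn.Linear(x_dim, hidden), nn.ReLU(), nn.Linear(hidden, y_dim)
        )

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        return self.net(x)


def bridge_forward(y0: torch.Tensor, anchor: torch.Tensor,
                   t: torch.Tensor, sigma: float = 1.0):
    """Sample a noisy state y_t on a Brownian-bridge path from the clean
    target y0 (at t=0) to the input-dependent anchor (at t=1).

    Unlike an input-independent forward process, y_t here depends on x
    through the anchor, so the noisy training states encode the
    conditioning inputs."""
    t = t.view(-1, *([1] * (y0.dim() - 1)))    # broadcast t over output dims
    mean = (1.0 - t) * y0 + t * anchor         # deterministic bridge mean
    std = sigma * torch.sqrt(t * (1.0 - t))    # bridge noise, zero at endpoints
    eps = torch.randn_like(y0)
    return mean + std * eps, eps


# Toy usage: 2-D inputs, 1-D outputs, a batch of 8 context-target points.
P_psi = AnchorNet(x_dim=2, y_dim=1)
x = torch.randn(8, 2)
y0 = torch.randn(8, 1)
t = torch.rand(8)
yt, eps = bridge_forward(y0, P_psi(x), t)
```

Because the anchor appears inside `yt`, a denoising loss computed on `yt` backpropagates into `P_psi`'s parameters, which is one way to read the abstract's "direct gradient pathway" absent from an unconditional forward kernel.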