Outlier-robust Diffusion Posterior Sampling for Bayesian Inverse Problems
arXiv:2602.02045v2 Announce Type: replace
Abstract: Diffusion models have emerged as powerful learned priors for Bayesian inverse problems (BIPs). Diffusion-based solvers rely on a presumed observation likelihood to guide the generation process. Likelihood misspecification is common in practical BIPs and is known to degrade recovery performance, particularly under outlier contamination. We investigate this problem by first characterizing the induced posterior deviation and proving the stability of diffusion-based solvers for linear BIPs. Our stability analysis further reveals potential robustness deficiencies of existing diffusion-based solvers under outlier-contaminated measurements. To address this issue, we propose a simple yet effective remedy: robust diffusion posterior sampling, which is provably outlier-robust for linear BIPs and compatible with existing gradient-based posterior samplers. Empirical results on scientific inverse problems and natural image tasks demonstrate the effectiveness and robustness of our method, with consistent performance gains in challenging outlier-contaminated scenarios for both linear and nonlinear tasks.
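To illustrate the kind of modification the abstract describes, below is a minimal sketch of an outlier-robust likelihood-guidance step for a linear inverse problem y = A x + noise. The specific choice of a Huber-style robust loss, and all function names and parameters, are assumptions for illustration only — the paper's actual formulation of robust diffusion posterior sampling may differ. The point is that replacing the Gaussian (squared-residual) guidance gradient with a bounded-influence one caps the effect of outlier-contaminated measurements on each sampling step.

```python
import numpy as np

def huber_grad(r, delta=1.0):
    # Gradient of the Huber loss: quadratic near zero, linear in the tails,
    # so large (outlier) residuals contribute only bounded guidance.
    return np.where(np.abs(r) <= delta, r, delta * np.sign(r))

def robust_guidance_step(x0_hat, y, A, delta=1.0, step=0.1):
    # One likelihood-guidance update for a linear BIP, y = A x + noise.
    # A Gaussian likelihood would use the raw residual r (squared loss);
    # substituting a robust loss bounds the influence of any single
    # contaminated measurement on the update. (Illustrative sketch, not
    # the paper's exact sampler.)
    r = A @ x0_hat - y
    grad = A.T @ huber_grad(r, delta)  # gradient of the summed Huber losses
    return x0_hat - step * grad
```

In a full diffusion-based solver this update would be interleaved with the denoising steps of the reverse process, applied to the current estimate of the clean sample; any gradient-based posterior sampler that consumes a likelihood gradient could swap in the robust gradient in the same way.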