Unrolling Graph-based Douglas-Rachford Algorithm for Image Interpolation with Informed Initialization
arXiv:2509.11926v4 Announce Type: replace
Abstract: Conventional deep neural nets (DNNs) initialize network parameters at random and then optimize them via stochastic gradient descent (SGD), incurring substantial risk of convergence to poor-performing local minima. Focusing on image interpolation, and leveraging a recent theorem that maps a (pseudo-)linear interpolator Θ to a directed graph filter that solves a corresponding MAP problem with a graph shift variation (GSV) prior, we first initialize a directed graph adjacency matrix A from a known interpolator Θ, establishing a baseline performance. Then, towards further gains, we learn perturbation matrices P and P^{(2)} from data to augment A; their restoration effects are applied progressively via Douglas-Rachford (DR) iterations, which we unroll into a lightweight and interpretable neural net. Experiments on different image interpolation scenarios demonstrate state-of-the-art performance while drastically reducing network parameters and inference complexity.
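To make the unrolling concrete, below is a minimal sketch of Douglas-Rachford iterations for interpolation with a GSV-style prior. All names are hypothetical and the setup is simplified: a 1-D signal, a quadratic prior ||(I - A)x||^2 in place of the paper's full MAP objective, hard data fidelity on the observed samples, and no learned perturbation matrices P, P^{(2)}. Each loop iteration corresponds to one layer of the unrolled network.

```python
import numpy as np

def dr_unrolled(y, mask, A, gamma=1.0, n_iters=20):
    """Unrolled Douglas-Rachford iterations for signal interpolation.

    y     : observed signal (zeros at missing entries)
    mask  : boolean array, True where y is observed
    A     : directed graph adjacency matrix (rows sum to 1)
    (Illustrative sketch only; not the paper's exact formulation.)
    """
    n = len(y)
    L = np.eye(n) - A                      # graph shift variation operator
    # prox of (gamma/2)||L x||^2 is the linear solve (I + gamma L^T L)^{-1} v
    M = np.linalg.inv(np.eye(n) + gamma * L.T @ L)

    z = y.astype(float).copy()
    for _ in range(n_iters):               # each iteration = one unrolled layer
        x = M @ z                          # prox of the GSV prior
        r = 2 * x - z                      # reflection step
        v = np.where(mask, y, r)           # prox of data fidelity: reset observed samples
        z = z + v - x                      # DR averaging update
    return M @ z
```

In an unrolled network, `gamma` (and, in the paper's setting, the perturbations augmenting `A`) would become trainable per-layer parameters, with `n_iters` fixed as the network depth.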