Implicit Neural Representation-Based Continuous Single Image Super-Resolution: An Empirical Benchmark

arXiv:2601.17723v2 Announce Type: replace

Abstract: Implicit neural representation (INR) has become the standard approach for arbitrary-scale image super-resolution (ASSR). To date, no empirical study has systematically examined the effectiveness of existing methods or investigated the effects of different training recipes, such as scaling laws, objective design, and optimization strategies. A rigorous empirical analysis is essential not only for benchmarking performance and revealing true gains, but also for establishing the current state of ASSR, identifying saturation limits, and highlighting promising directions. We fill this gap by comparing existing techniques across diverse settings and presenting aggregated performance results on multiple image quality metrics. We contribute a unified framework for more reliable interpretation of performance comparisons and model evaluation claims. To facilitate reproducible comparisons, a unified codebase is also provided. Furthermore, we investigate the impact of carefully controlled training configurations on perceptual image quality and analyze the role of auxiliary objectives in preserving edges, textures, and fine details during training. We draw the following key insights, previously overlooked: (1) Recent, more complex INR methods provide only marginal improvements over earlier methods. (2) Model performance is strongly correlated with training configuration, a factor overlooked in prior works. (3) Auxiliary objectives consistently enhance texture fidelity across architectures compared to a standard L1 loss, emphasizing the role of objective design for targeted perceptual gains. (4) Scaling laws apply to INR-based ASSR, confirming predictable gains with increased model complexity, training compute, and data diversity.
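For readers unfamiliar with how INR enables arbitrary-scale decoding, the sketch below illustrates the general idea in the style of local implicit image functions (as in LIIF): a shared MLP maps a latent feature plus a continuous relative coordinate to an RGB value, so the same model can be queried at any target resolution. This is a minimal toy illustration with random weights, not code from the paper's codebase; all function and variable names here are hypothetical.

```python
import numpy as np

rng = np.random.default_rng(0)

def mlp(x, weights):
    # Tiny ReLU MLP: x is (N, d_in); returns (N, 3) RGB predictions.
    for W, b in weights[:-1]:
        x = np.maximum(x @ W + b, 0.0)
    W, b = weights[-1]
    return x @ W + b

def query_rgb(feat, coords, weights):
    """LIIF-style query: for each continuous coordinate in [0, 1]^2,
    take the nearest latent feature vector and the relative offset to
    its cell center, then decode RGB with a shared MLP."""
    H, W_, _ = feat.shape
    ys = (np.arange(H) + 0.5) / H          # latent cell centers (rows)
    xs = (np.arange(W_) + 0.5) / W_        # latent cell centers (cols)
    iy = np.clip((coords[:, 0] * H).astype(int), 0, H - 1)
    ix = np.clip((coords[:, 1] * W_).astype(int), 0, W_ - 1)
    rel = coords - np.stack([ys[iy], xs[ix]], axis=1)  # relative offset
    inp = np.concatenate([feat[iy, ix], rel], axis=1)
    return mlp(inp, weights)

# Toy setup: an 8x8 latent feature map with 16 channels (in practice
# this would come from an encoder such as EDSR or SwinIR).
feat = rng.normal(size=(8, 8, 16))
dims = [16 + 2, 32, 3]  # feature channels + 2 coordinate dims -> RGB
weights = [(rng.normal(scale=0.1, size=(dims[i], dims[i + 1])),
            np.zeros(dims[i + 1])) for i in range(len(dims) - 1)]

# Decode at an arbitrary, non-integer scale: 23x23 output (x2.875).
t = 23
gy, gx = np.meshgrid((np.arange(t) + 0.5) / t,
                     (np.arange(t) + 0.5) / t, indexing="ij")
coords = np.stack([gy.ravel(), gx.ravel()], axis=1)
rgb = query_rgb(feat, coords, weights).reshape(t, t, 3)
print(rgb.shape)  # (23, 23, 3)
```

Because the decoder is conditioned on continuous coordinates rather than a fixed upsampling factor, the same trained weights serve every scale; this is the property the benchmark evaluates across methods and training recipes.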
