Adversarial Attacks on Medical Hyperspectral Imaging Exploiting Spectral-Spatial Dependencies and Multiscale Features
arXiv:2601.07056v2 Announce Type: replace-cross
Abstract: Medical hyperspectral imaging (MHSI) has shown strong potential for disease diagnosis by capturing spectral-spatial information of tissues. While deep learning has substantially improved MHSI classification accuracy, its robustness remains limited due to the well-known trade-off between accuracy and robustness in deep neural networks (DNNs). This issue is particularly critical in MHSI, where reliable prediction depends on local tissue relationships and multiscale spectral-spatial structures. A practical way to improve robustness is to identify the most damaging adversarial examples and incorporate them into adversarial training. However, existing attack methods do not sufficiently exploit these MHSI-specific properties, leading to suboptimal attack effectiveness and limited value for robustness enhancement. To address this gap, we propose a structured adversarial attack framework for MHSI that progressively models its local spectral-spatial dependencies and multiscale hierarchical representations, generating anatomically consistent perturbations from neighborhood dependencies and hierarchical spectral-spatial features. Experiments on the brain and choledoch datasets show that our method degrades lesion-related classification performance in critical tumor regions more effectively than existing baselines while maintaining low perturbation magnitude. These results reveal a clinically relevant robustness weakness in current MHSI models and provide stronger adversarial samples for developing targeted defense strategies.
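To make the core idea concrete, the sketch below shows a minimal FGSM-style perturbation on a hyperspectral patch whose gradient is spatially averaged over a pixel neighborhood, so that nearby pixels receive consistent perturbations. This is only an illustrative surrogate for the paper's neighborhood-dependency modeling, not the authors' implementation; the function name, the linear per-pixel scorer, and all parameters are hypothetical assumptions for the example.

```python
import numpy as np

def smoothed_fgsm(cube, weights, label, eps=0.03, kernel=3):
    """Illustrative FGSM-style attack on a linear per-pixel classifier.

    A neighborhood-averaging step stands in for the paper's
    spectral-spatial dependency modeling (hypothetical sketch,
    not the authors' method).

    cube:    (H, W, B) hyperspectral patch, values in [0, 1]
    weights: (B,) weights of a linear scorer applied to each pixel
    label:   0 pushes the score up, otherwise down
    """
    H, W, B = cube.shape
    # For a linear score w^T x per pixel, the input gradient is just
    # `weights`, signed by the attack direction for `label`.
    sign = 1.0 if label == 0 else -1.0
    grad = np.broadcast_to(sign * weights, (H, W, B)).copy()

    # Average the gradient over a kernel x kernel spatial neighborhood
    # so neighboring pixels are perturbed coherently (a crude stand-in
    # for anatomically consistent perturbations).
    pad = kernel // 2
    padded = np.pad(grad, ((pad, pad), (pad, pad), (0, 0)), mode="edge")
    smooth = np.zeros_like(grad)
    for i in range(kernel):
        for j in range(kernel):
            smooth += padded[i:i + H, j:j + W]
    smooth /= kernel * kernel

    # Single signed step of size eps, clipped to the valid data range,
    # so the L-infinity perturbation magnitude stays at most eps.
    return np.clip(cube + eps * np.sign(smooth), 0.0, 1.0)
```

By construction the adversarial patch differs from the input by at most `eps` per entry, mirroring the low-perturbation-magnitude constraint the abstract reports; a multiscale variant would repeat the smoothing at several kernel sizes and combine the results.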