Exploring Self-Supervised Learning with U-Net Masked Autoencoders and EfficientNet-B7 for Improved Gastrointestinal Abnormality Classification in Video Capsule Endoscopy

arXiv:2410.19899v2

Abstract: Video Capsule Endoscopy (VCE) has become an indispensable diagnostic tool for gastrointestinal (GI) disorders due to its non-invasive nature and ability to capture high-resolution images of the small intestine. However, the enormous volume of data generated during a single procedure makes manual inspection labor-intensive, time-consuming, and prone to inter-observer variability. Automated analysis using deep learning offers a promising solution, but its effectiveness is often limited by data imbalance and the high cost of labeled medical data. In this work, we propose a novel framework that combines self-supervised learning through a U-Net-based masked autoencoder with supervised feature extraction using EfficientNet-B7 for multi-class abnormality classification in VCE images. The U-Net model is first trained in a self-supervised manner using Gaussian noise removal and masked reconstruction to learn robust visual representations without requiring annotations. The learned encoder features are then fused with EfficientNet-B7 features to form a rich, discriminative representation for classification. We evaluate our approach on the Capsule Vision 2024 Challenge dataset consisting of ten abnormality classes and a dominant normal class. Experimental results demonstrate that the proposed fusion framework achieves a validation accuracy of 94%, outperforming standalone architectures and attention-based fusion variants. The study highlights the effectiveness of self-supervised representation learning and feature fusion in addressing class imbalance and improving diagnostic accuracy in real-world medical imaging scenarios.
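
The abstract describes a two-stage pipeline: (1) self-supervised pretraining of a U-Net as a denoising/masked autoencoder, and (2) a classifier that fuses the pretrained U-Net encoder features with EfficientNet-B7 features. Below is a minimal PyTorch sketch of that pipeline under stated assumptions: the encoder/decoder widths, the corruption hyperparameters (noise_std, mask_ratio, patch), the use of ImageNet weights for EfficientNet-B7, and the 11-way output head (ten abnormalities plus normal) are illustrative choices, since the abstract does not specify the exact configuration.

```python
# Hypothetical sketch of the two-stage pipeline; depths, hyperparameters, and
# the corruption scheme are assumptions, not the authors' exact settings.
import torch
import torch.nn as nn
import torch.nn.functional as F
import torchvision.models as models

class UNetEncoder(nn.Module):
    """Simplified U-Net-style encoder (the paper's exact configuration is not given)."""
    def __init__(self, in_ch=3, base=32):
        super().__init__()
        def block(ci, co):
            return nn.Sequential(
                nn.Conv2d(ci, co, 3, padding=1), nn.ReLU(inplace=True),
                nn.Conv2d(co, co, 3, padding=1), nn.ReLU(inplace=True))
        self.enc1 = block(in_ch, base)
        self.enc2 = block(base, base * 2)
        self.enc3 = block(base * 2, base * 4)
        self.pool = nn.MaxPool2d(2)

    def forward(self, x):
        f = self.enc1(x)
        f = self.enc2(self.pool(f))
        return self.enc3(self.pool(f))  # deepest features, reused later for fusion

class ReconDecoder(nn.Module):
    """Light decoder mapping encoder features back to an RGB reconstruction."""
    def __init__(self, base=32):
        super().__init__()
        self.up = nn.Sequential(
            nn.ConvTranspose2d(base * 4, base * 2, 2, stride=2), nn.ReLU(inplace=True),
            nn.ConvTranspose2d(base * 2, base, 2, stride=2), nn.ReLU(inplace=True),
            nn.Conv2d(base, 3, 3, padding=1))

    def forward(self, f):
        return self.up(f)

def corrupt(x, noise_std=0.1, mask_ratio=0.4, patch=16):
    """Add Gaussian noise, then zero out random square patches (assumed values)."""
    noisy = x + noise_std * torch.randn_like(x)
    b, _, h, w = x.shape
    keep = (torch.rand(b, 1, h // patch, w // patch, device=x.device) > mask_ratio).float()
    return noisy * F.interpolate(keep, size=(h, w), mode="nearest")

# Stage 1: self-supervised pretraining -- reconstruct the clean frame from its
# noisy, masked version; no labels are required.
encoder, decoder = UNetEncoder(), ReconDecoder()
opt = torch.optim.Adam(list(encoder.parameters()) + list(decoder.parameters()), lr=1e-4)
clean = torch.rand(4, 3, 224, 224)            # stand-in for a batch of VCE frames
loss = F.mse_loss(decoder(encoder(corrupt(clean))), clean)
opt.zero_grad(); loss.backward(); opt.step()

class FusionClassifier(nn.Module):
    """Stage 2: pool the pretrained U-Net features and EfficientNet-B7 features,
    concatenate them, and classify with a linear head."""
    def __init__(self, unet_encoder, num_classes=11):  # 10 abnormalities + normal
        super().__init__()
        self.unet = unet_encoder
        self.effnet = models.efficientnet_b7(
            weights=models.EfficientNet_B7_Weights.IMAGENET1K_V1).features
        self.pool = nn.AdaptiveAvgPool2d(1)
        self.head = nn.LazyLinear(num_classes)  # fused dim inferred on first call

    def forward(self, x):
        a = self.pool(self.unet(x)).flatten(1)
        b = self.pool(self.effnet(x)).flatten(1)
        return self.head(torch.cat([a, b], dim=1))

clf = FusionClassifier(encoder)
logits = clf(clean)                            # shape: (4, 11)
```

The nn.LazyLinear head is a convenience so the sketch does not hardcode the concatenated feature width; a fixed nn.Linear with the known U-Net and EfficientNet-B7 channel counts would work equally well.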
