Modeling Image-Caption Rating from Comparative Judgments

arXiv:2602.00381v2

Abstract: Image caption rating is becoming increasingly important because computer-generated captions are used extensively for descriptive annotation. However, rating how accurately a caption describes an image is time-consuming and subjective. In contrast, it is often easier for people, given two image-caption pairs, to judge which caption better matches its image. In this study, we propose a machine learning framework that models such comparative judgments instead of direct ratings. The resulting model can then rank unseen image-caption pairs in the same way as a regression model trained on direct ratings. Inspired by a state-of-the-art regression approach, we extracted visual and text features using a pre-trained ViLBERT model and tuned the learning parameters of the baseline model to improve performance. This new regression model (Kendall's $\tau_c = 0.812$) outperformed the baseline model (Kendall's $\tau_c = 0.758$) on the VICR dataset. The same model structure was then applied to the comparative learning framework. Trained on comparative judgments (the caption in pair A matches its image better than the caption in pair B does), the comparative learning model achieved performance (Kendall's $\tau_c = 0.804$) close to that of the regression model. In addition, a small-scale human subject study compared the cost and quality of direct ratings, pairwise comparisons, and same-image comparisons. Comparative judgments were collected faster and with greater agreement among human annotators than direct ratings. These results suggest that collecting comparative judgments instead of direct ratings as training labels is a promising way to lower annotation cost and improve consistency, and that a model trained on comparative judgments can perform as well as one trained on direct ratings.
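
To make the comparative learning setup concrete, below is a minimal sketch of training a scoring model from pairwise judgments rather than direct ratings. It assumes joint image-caption features have already been extracted (e.g., by a pre-trained ViLBERT encoder); the network shape, dimensions, and hyperparameters are hypothetical illustrations, not the authors' implementation. A margin ranking loss is one standard way to learn from "A matches better than B" labels.

```python
# Sketch: learning an image-caption rating model from comparative judgments.
# feats_a / feats_b are precomputed joint features for image-caption pairs A and B;
# all names and sizes here are illustrative assumptions.
import torch
import torch.nn as nn

class CaptionScorer(nn.Module):
    """Maps a joint image-caption feature vector to a scalar match score."""
    def __init__(self, feat_dim: int = 1024, hidden: int = 256):
        super().__init__()
        self.head = nn.Sequential(
            nn.Linear(feat_dim, hidden),
            nn.ReLU(),
            nn.Linear(hidden, 1),
        )

    def forward(self, features: torch.Tensor) -> torch.Tensor:
        return self.head(features).squeeze(-1)

scorer = CaptionScorer()
optimizer = torch.optim.Adam(scorer.parameters(), lr=1e-4)
# target = +1 means pair A matches better than pair B, -1 the reverse
rank_loss = nn.MarginRankingLoss(margin=0.1)

def train_step(feats_a: torch.Tensor, feats_b: torch.Tensor,
               target: torch.Tensor) -> float:
    """One update from a batch of comparative judgments (A vs. B)."""
    optimizer.zero_grad()
    loss = rank_loss(scorer(feats_a), scorer(feats_b), target)
    loss.backward()
    optimizer.step()
    return loss.item()
```

Because the scorer outputs a scalar per pair, it can rank unseen image-caption pairs at inference time exactly like a regression model trained on direct ratings, which is why the two approaches are directly comparable.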

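The reported metric, Kendall's $\tau_c$, measures rank correlation between predicted scores and gold ratings. As a hedged sketch of how such a number can be computed (the arrays below are toy values, not the paper's data), SciPy exposes the tau-c variant directly:

```python
# Toy evaluation sketch: Kendall's tau_c between gold ratings and model scores.
from scipy.stats import kendalltau

gold_ratings = [5, 3, 4, 1, 2, 4, 5, 2]                  # human ratings (toy values)
model_scores = [4.8, 2.9, 4.1, 1.3, 2.2, 3.7, 4.9, 2.5]  # model outputs (toy values)

tau_c, p_value = kendalltau(gold_ratings, model_scores, variant="c")
print(f"Kendall's tau_c = {tau_c:.3f} (p = {p_value:.3g})")
```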