Toward Scalable Audio Description Quality Control: A Workflow for Evaluating Human and VLM Raters

arXiv:2602.01390v2

Abstract: Digital video is central to communication, education, and entertainment, but without audio description (AD), blind and low-vision users are excluded. While crowdsourced platforms and vision-language models (VLMs) expand AD production, quality is rarely checked systematically. Existing evaluations rely on NLP metrics and short-clip guidelines, leaving open the question of how to assess long-form AD quality at scale. To address this, we developed a methodological workflow using Item Response Theory to evaluate VLM and human rater proficiency against expert-established ground truth. Evaluations were based on a six-dimensional framework, grounded in professional guidelines and shaped by insights from our accessibility experts and blind consultants. Findings suggest that top-performing VLMs can approximate ground-truth ratings at levels comparable to human raters. However, qualitative analysis reveals that VLM reasoning is less reliable and actionable than that of human respondents. These insights underscore the potential of hybrid evaluation systems that leverage VLMs alongside human oversight, offering a path toward scalable AD quality control.
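To make the Item Response Theory step concrete, the sketch below shows one plausible way to estimate rater proficiency from agreement with expert ground truth. This is not the authors' code or data: the Rasch-style model, the binary agreement encoding, and the toy rater/item counts are all illustrative assumptions.

```python
# Minimal sketch (assumed setup, not the paper's workflow): a Rasch-style IRT
# model scoring rater "proficiency" from agreement with expert ground truth.
# Assumption: responses[r, i] = 1 if rater r matched the expert rating on
# item i (a clip x rubric-dimension pair), else 0.
import numpy as np
from scipy.optimize import minimize

rng = np.random.default_rng(0)

# Toy data: 8 raters (humans and VLMs) x 30 items, binary agreement flags.
n_raters, n_items = 8, 30
true_ability = rng.normal(0.0, 1.0, n_raters)      # latent rater proficiency
true_difficulty = rng.normal(0.0, 1.0, n_items)    # latent item difficulty
logits = true_ability[:, None] - true_difficulty[None, :]
responses = (rng.random((n_raters, n_items)) < 1 / (1 + np.exp(-logits))).astype(float)

def neg_log_likelihood(params):
    """Joint maximum-likelihood Rasch fit: P(agree) = sigmoid(ability - difficulty)."""
    ability = params[:n_raters]
    difficulty = params[n_raters:]
    z = ability[:, None] - difficulty[None, :]
    p = np.clip(1 / (1 + np.exp(-z)), 1e-9, 1 - 1e-9)
    return -np.sum(responses * np.log(p) + (1 - responses) * np.log(1 - p))

x0 = np.zeros(n_raters + n_items)
fit = minimize(neg_log_likelihood, x0, method="L-BFGS-B")
ability_hat = fit.x[:n_raters] - fit.x[:n_raters].mean()  # anchor the scale at mean 0

for r, a in enumerate(ability_hat):
    print(f"rater {r}: estimated proficiency {a:+.2f}")
```

In a workflow like the one described, the estimated proficiencies would let top-performing VLM raters be compared directly against human raters on a common latent scale, with expert-established ground truth defining the items.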
