Images Amplify Misinformation Sharing in Vision-Language Models
arXiv:2505.13302v2 Announce Type: replace
Abstract: As language and vision-language models (VLMs) become central to information access and online interaction, concerns grow about their potential to amplify misinformation. Human studies show that images boost the perceived credibility and shareability of information, raising the question of whether VLMs exhibit the same vulnerability. We present the first study examining how images influence VLMs' propensity to reshare news content, how this effect varies across model families, and how persona conditioning and content attributes modulate such behavior. We develop a jailbreaking-inspired prompting strategy that bypasses VLMs' default refusals to engage with controversial news, allowing them to generate resharing decisions across diverse topics and elicited traits, including antisocial ones. We evaluate four state-of-the-art VLMs on a novel multimodal dataset of fact-checked political news from PolitiFact, paired with images and ground-truth veracity labels. Our experiments show that image presence increases resharing rates by 14.5% for false news and 5.3% for true news. Persona conditioning further modulates this effect: Dark Triad traits amplify resharing of false news, whereas Republican-aligned profiles reduce sensitivity to veracity. Among the tested models, Claude-3-Haiku demonstrates the greatest robustness to visual misinformation. These findings reveal that VLMs replicate human-like biases in response to images, underscoring emerging risks for multimodal AI systems. They point to the need for evaluation frameworks and mitigation strategies that account for visual influence and persona-driven variability, particularly in sociotechnical settings where AI systems shape public discourse and information sharing.