UniMoCo: Unified Modality Completion for Robust Multi-Modal Embeddings

arXiv:2505.11815v2

Abstract: Vision-language models are increasingly applied to multi-modal embedding tasks such as information retrieval. However, they struggle with real-world queries and targets that involve diverse modality combinations: existing approaches often fail to align all modality combinations within a unified embedding space during training, which degrades performance on rare modality patterns at inference. To address this limitation, we propose UniMoCo, a novel architecture featuring a modality-completion module that generates visual features from text, thereby ensuring modality completeness for both queries and targets. UniMoCo additionally employs a specialized training strategy that aligns the embeddings of original and modality-completed inputs, yielding consistent and robust embeddings across diverse modality combinations. Comprehensive experiments demonstrate that UniMoCo outperforms previous methods while remaining robust across diverse settings. Furthermore, we identify and quantify the inherent bias in conventional approaches caused by imbalanced modality combinations in the training data, and show that the modality-completion paradigm effectively mitigates it. The code is available at https://github.com/HobbitQia/UniMoCo.
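The abstract gives no implementation details, but its two core ideas, generating visual features from text to complete a missing modality and aligning the embeddings of original and completed inputs, can be sketched roughly as below. Everything in this sketch (the module names ModalityCompletion and alignment_loss, the dimensions, the additive fusion, and the cosine form of the alignment loss) is an illustrative assumption, not the paper's actual design; see the linked repository for the real implementation.

import torch
import torch.nn as nn
import torch.nn.functional as F

class ModalityCompletion(nn.Module):
    """Hypothetical completion module: maps text features to synthetic
    visual features so every query/target has both modalities."""
    def __init__(self, text_dim: int = 512, vis_dim: int = 512, hidden: int = 1024):
        super().__init__()
        self.net = nn.Sequential(
            nn.Linear(text_dim, hidden),
            nn.GELU(),
            nn.Linear(hidden, vis_dim),
        )

    def forward(self, text_feat: torch.Tensor) -> torch.Tensor:
        return self.net(text_feat)

def alignment_loss(emb_original: torch.Tensor, emb_completed: torch.Tensor) -> torch.Tensor:
    """Pull the embedding of the original (possibly incomplete) input toward
    the embedding of its modality-completed counterpart (assumed cosine form)."""
    return 1.0 - F.cosine_similarity(emb_original, emb_completed, dim=-1).mean()

# Toy usage: text-only queries get synthetic visual features, and the
# two resulting embeddings are trained to agree.
completer = ModalityCompletion()
text_feat = torch.randn(8, 512)             # text features for a batch of queries
vis_feat = completer(text_feat)             # completed visual features
emb_text_only = F.normalize(text_feat, dim=-1)
emb_completed = F.normalize(text_feat + vis_feat, dim=-1)  # stand-in fusion
loss = alignment_loss(emb_text_only, emb_completed)
loss.backward()

Under this reading, the alignment term makes the encoder insensitive to whether the visual input was observed or synthesized, which is what would let rare modality patterns share the embedding space learned from common ones.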
