Aligning Multi-Dimensional Preferences via Relevance Feedback: An Effortless and Training-Free Framework for Text-to-Image Diffusion
arXiv:2603.14936v2 Announce Type: replace
Abstract: Aligning generated images with users' latent visual preferences remains a fundamental challenge in text-to-image diffusion models. Existing methods fall short: training-based approaches incur prohibitive costs and lack flexibility, while inference-time methods that rely on textual feedback impose heavy cognitive burdens. Recent binary-feedback methods reduce effort but force Foundation Models (FMs) to infer preferences semantically. During multi-dimensional alignment, FMs suffer from inference overload and fail to accurately attribute individual feature contributions under conflicting user signals. Consequently, a low-cost, low-cognitive-load framework for multi-dimensional alignment remains critically absent.

To address this, we propose a Relevance Feedback-Driven (RFD) framework that adapts the relevance feedback mechanism from information retrieval to diffusion models. RFD replaces explicit dialogue with implicit visual feedback, enabling effortless expression of multi-dimensional preferences. To tackle inference overload, RFD decouples the process into independent single-feature preference inference tasks. Furthermore, to overcome FMs' inability to attribute features under conflicting signals, RFD employs rigorous statistical measures (the Odds Ratio and Cohen's d) to quantify feature divergence between "liked" and "disliked" images, achieving the accurate, transparent feature attribution that FMs fundamentally lack.

Crucially, RFD operates entirely in the external text space, making it strictly training-free and model-agnostic. This provides a universal plug-and-play solution without prohibitive fine-tuning costs. Extensive experiments demonstrate that RFD effectively captures users' true visual intent, significantly outperforming baseline approaches.
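The abstract does not give the exact computation, but the two named measures are standard; the following is a minimal sketch of how such divergence scores might be computed over feature annotations of "liked" and "disliked" image sets. The feature layout, the 0.5 continuity correction, and the example values are illustrative assumptions, not details taken from the paper.

    import math

    def odds_ratio(liked_has, liked_total, disliked_has, disliked_total):
        """Odds ratio for a binary feature (e.g., a style tag present/absent),
        with a 0.5 continuity correction to avoid division by zero."""
        a = liked_has + 0.5                      # liked images with the feature
        b = liked_total - liked_has + 0.5        # liked images without it
        c = disliked_has + 0.5                   # disliked images with the feature
        d = disliked_total - disliked_has + 0.5  # disliked images without it
        return (a / b) / (c / d)

    def cohens_d(liked_scores, disliked_scores):
        """Cohen's d for a continuous feature score (e.g., brightness),
        using the pooled standard deviation of the two groups."""
        n1, n2 = len(liked_scores), len(disliked_scores)
        m1 = sum(liked_scores) / n1
        m2 = sum(disliked_scores) / n2
        v1 = sum((x - m1) ** 2 for x in liked_scores) / (n1 - 1)
        v2 = sum((x - m2) ** 2 for x in disliked_scores) / (n2 - 1)
        pooled = math.sqrt(((n1 - 1) * v1 + (n2 - 1) * v2) / (n1 + n2 - 2))
        return (m1 - m2) / pooled

    # Example: a hypothetical "watercolor" tag appears in 7 of 8 liked images
    # but only 2 of 8 disliked ones; the large odds ratio flags it as preferred.
    print(odds_ratio(7, 8, 2, 8))                        # 13.0 with the correction
    print(cohens_d([0.8, 0.9, 0.7], [0.2, 0.3, 0.1]))    # 6.0, a large effect size

A feature with an odds ratio far from 1 (binary case) or a large |d| (continuous case) would, under this reading, be the kind of dimension RFD surfaces as a preferred or disfavored attribute, which is what makes the attribution transparent rather than inferred.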