Look Beyond Saliency: Low-Attention Guided Dual Encoding for Video Semantic Search

arXiv:2605.06229v1 Announce Type: new Abstract: Video semantic search in densely crowded scenes remains challenging because visual encoders tend to prioritize salient foreground regions while neglecting contextually important background areas. We propose an Inverse Attention Embedding mechanism that explicitly captures and highlights these overlooked regions. By combining inverse attention embeddings with traditional visual embeddings, our method significantly enhances semantic retrieval performance without additional training. Initial experiments and ablation studies demonstrate promising improvements in recall over existing approaches for video semantic search in crowded environments.
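The core idea, inverting an encoder's attention weights so that low-attention (background) patches drive a second pooled embedding, can be sketched roughly as follows. This is a minimal illustration of the general technique, not the paper's actual method: the function names, the convex-combination fusion, and the `alpha` parameter are all assumptions for the sketch.

```python
import numpy as np

def inverse_attention_embedding(patch_feats, attn, eps=1e-8):
    """Pool patch features with inverted attention weights.

    patch_feats: (N, D) array of per-patch features from a visual encoder.
    attn: (N,) array of non-negative attention weights over the patches.
    Returns a (D,) embedding dominated by low-attention regions.
    """
    attn = attn / (attn.sum() + eps)      # normalize to a distribution
    inv = 1.0 - attn                      # emphasize low-attention patches
    inv = inv / (inv.sum() + eps)         # renormalize the inverted weights
    return inv @ patch_feats              # weighted pooling -> (D,)

def fuse(visual_emb, inv_emb, alpha=0.5):
    """Combine the standard embedding with the inverse-attention one.

    A simple convex combination (alpha is an illustrative choice);
    the result is L2-normalized for cosine-similarity retrieval.
    """
    z = alpha * visual_emb + (1.0 - alpha) * inv_emb
    return z / (np.linalg.norm(z) + 1e-8)
```

Because both steps operate on features and attention maps the encoder already produces, this kind of fusion requires no additional training, consistent with the training-free claim in the abstract.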
