Beyond Boundary Frames: Context-Centric Video Interpolation with Audio-Visual Semantics

arXiv:2512.03590v2 Announce Type: replace

Abstract: Video frame interpolation (VFI) has long been challenged by limited controllability and interactivity, especially in scenarios involving fast, highly non-linear, and fine-grained motion. Although recent interactive interpolation methods have made progress, they remain largely boundary-centric and ignore auxiliary contextual signals beyond the start and end frames, leading to outputs that deviate from user-intended objectives. To address this issue, we reformulate VFI from a boundary-centric task into a context-centric generation problem. Building on this reformulation, we propose BBF (Beyond Boundary Frames), a context-centric video frame interpolation framework with decoupled multimodal conditioning that jointly exploits endpoint-adjacent visual context, text semantics, and audio-correlated temporal dynamics. To balance endpoint consistency with context-dependent temporal evolution, BBF further introduces a multi-stream context integration mechanism consisting of endpoint-constraint integration, evolution-prior integration, and temporal-context integration. In addition, BBF adopts a progressive training strategy to stabilize multimodal learning and improve controllable interpolation. Extensive experiments show that BBF outperforms specialized state-of-the-art methods on both generic interpolation and audio-visual synchronized generation tasks, establishing a unified framework for video frame interpolation under coordinated multimodal conditioning. The code, model, and interface will be released to facilitate further research.
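The abstract describes a multi-stream context integration mechanism fusing three conditioning streams. Since the paper's architecture is not reproduced here, the sketch below is purely illustrative: the function name, feature shapes, and the weighted-sum fusion rule are assumptions, not the authors' implementation, which presumably uses learned fusion inside a generative model.

```python
import numpy as np

def integrate_streams(endpoint_feat, evolution_feat, temporal_feat,
                      weights=(1.0, 0.5, 0.5)):
    """Toy fusion of three conditioning streams into one context feature.

    endpoint_feat  -- features tied to the start/end frames (hard constraint)
    evolution_feat -- prior on how content evolves between the endpoints
    temporal_feat  -- audio/text-correlated temporal context

    A fixed weighted sum stands in for whatever learned integration
    BBF actually performs; it only illustrates the three-stream idea.
    """
    w_e, w_p, w_t = weights
    return w_e * endpoint_feat + w_p * evolution_feat + w_t * temporal_feat

# Toy usage: 4 intermediate frames, 8-dimensional features per frame.
ctx = integrate_streams(np.ones((4, 8)),
                        np.zeros((4, 8)),
                        np.full((4, 8), 2.0))
```

Here the endpoint stream carries the largest weight, reflecting the abstract's emphasis on preserving endpoint consistency while still admitting context-dependent evolution.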
