Modeling Biomechanical Constraint Violations for Language-Agnostic Lip-Sync Deepfake Detection

arXiv:2604.16808v1

Abstract: Current lip-sync deepfake detectors rely on pixel-level artifacts or audio-visual correspondence, failing to generalize across languages because these cues encode data-dependent patterns rather than universal physical laws. We identify a more fundamental principle: generative models do not enforce the biomechanical constraints of authentic orofacial articulation, producing measurably elevated temporal lip variance -- a signal we term temporal lip jitter -- that is empirically consistent across the speaker's language, ethnicity, and recording conditions. We instantiate this principle through BioLip, a lightweight framework operating on 64 perioral landmark coordinates extracted by MediaPipe.
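As a rough illustration of the kind of signal the abstract describes (not the authors' BioLip implementation), the sketch below tracks lip-region landmarks across frames with MediaPipe FaceMesh and reports the mean variance of frame-to-frame landmark displacements as a stand-in "temporal lip jitter" score. The specific 64-point perioral set and the paper's actual jitter definition are not given in the abstract, so the landmark selection (MediaPipe's lip connectivity set) and the variance statistic here are assumptions.

```python
# Minimal sketch, assuming a variance-of-displacements jitter metric and
# MediaPipe's FACEMESH_LIPS indices as a proxy for the perioral region.
import cv2
import numpy as np
import mediapipe as mp

mp_face_mesh = mp.solutions.face_mesh
# Landmark indices touched by the lip connectivity set (placeholder for the
# paper's 64-point perioral set, which is not specified in the abstract).
LIP_IDX = sorted({i for edge in mp_face_mesh.FACEMESH_LIPS for i in edge})

def lip_jitter(video_path: str) -> float:
    """Mean per-landmark variance of frame-to-frame lip displacements."""
    cap = cv2.VideoCapture(video_path)
    tracks = []  # one (num_lip_points, 2) array per detected frame
    with mp_face_mesh.FaceMesh(static_image_mode=False,
                               refine_landmarks=True,
                               max_num_faces=1) as fm:
        while True:
            ok, frame = cap.read()
            if not ok:
                break
            res = fm.process(cv2.cvtColor(frame, cv2.COLOR_BGR2RGB))
            if not res.multi_face_landmarks:
                continue
            lm = res.multi_face_landmarks[0].landmark
            tracks.append(np.array([[lm[i].x, lm[i].y] for i in LIP_IDX]))
    cap.release()
    if len(tracks) < 2:
        return 0.0
    coords = np.stack(tracks)         # (T, N, 2) normalized landmark coords
    deltas = np.diff(coords, axis=0)  # frame-to-frame displacements
    # Per the abstract's claim, this variance would be elevated for lip-synced
    # fakes relative to authentic articulation.
    return float(np.var(deltas, axis=0).mean())
```

In practice a detector built on this idea would threshold or classify such a score rather than use the raw variance directly; the value here is only meant to show where the signal comes from.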
