A Novel FACS-Aligned Anatomical Text Description Paradigm for Fine-Grained Facial Behavior Synthesis

arXiv:2603.18588v2

Abstract: Facial behavior constitutes the primary medium of human nonverbal communication. Existing synthesis methods predominantly follow one of two paradigms: conditioning on coarse emotion category labels or on one-hot Action Unit (AU) vectors from the Facial Action Coding System (FACS). Neither paradigm reliably renders fine-grained facial behaviors, nor does either resolve the anatomically implausible artifacts caused by conflicting AUs. We therefore propose a novel task paradigm: anatomically grounded facial behavior synthesis from FACS-based AU descriptions. This paradigm explicitly encodes FACS-defined muscle movement rules, inter-AU interactions, and conflict resolution mechanisms into natural language control signals. To enable systematic research, we develop a dynamic AU text processor, a FACS rule-based module that converts raw AU annotations into anatomically consistent natural language descriptions. Using this processor, we construct BP4D-AUText, the first large-scale text-image paired dataset for fine-grained facial behavior synthesis, comprising over 302K high-quality samples. Because existing general-purpose semantic consistency metrics cannot capture the alignment between anatomical facial descriptions and synthesized muscle movements, we propose the Alignment Accuracy of AU Probability Distributions (AAAD), a task-specific metric that quantifies this alignment. Finally, we design VQ-AUFace, a robust baseline framework incorporating anatomical priors and progressive cross-modal alignment, to validate the paradigm. Extensive quantitative experiments and user studies demonstrate that the proposed paradigm significantly outperforms state-of-the-art methods, particularly in challenging conflicting-AU scenarios, achieving superior anatomical fidelity, semantic consistency, and visual quality.
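To make the idea of the AU text processor concrete, here is a minimal Python sketch of a FACS rule-based converter from AU annotations to natural-language descriptions, including a simple conflict-resolution step. The AU phrasings, the conflict table, and the tie-breaking heuristic are illustrative assumptions; the paper's actual processor and rule set are not specified in this abstract.

```python
# Illustrative sketch (not the authors' implementation) of a FACS rule-based
# AU-to-text converter in the spirit of the paper's dynamic AU text processor.
# The AU phrasings, conflict table, and tie-break rule below are assumptions.

AU_PHRASES = {
    1: "the inner brows are raised",
    4: "the brows are lowered and drawn together",
    12: "the lip corners are pulled upward into a smile",
    15: "the lip corners are pulled downward",
    26: "the jaw drops and the mouth opens",
}

# Pairs of AUs whose muscle actions oppose each other, e.g. the lip corner
# puller (AU12) versus the lip corner depressor (AU15). Assumed for illustration.
CONFLICTING_PAIRS = [frozenset({12, 15})]


def aus_to_description(active_aus):
    """Turn a set of active AU codes into one anatomically consistent sentence,
    dropping one AU from each conflicting pair via a simple heuristic."""
    aus = set(active_aus)
    for pair in CONFLICTING_PAIRS:
        if pair <= aus:
            aus.discard(max(pair))  # arbitrary tie-break for this sketch
    phrases = [AU_PHRASES[a] for a in sorted(aus) if a in AU_PHRASES]
    if not phrases:
        return "A face with a neutral expression."
    return "A face in which " + "; ".join(phrases) + "."


print(aus_to_description({1, 12, 15}))
# "A face in which the inner brows are raised; the lip corners are pulled upward into a smile."
```

Similarly, the abstract introduces AAAD only at a high level. The sketch below shows one plausible way such an alignment score could be computed, comparing the AUs named in a description against thresholded probabilities from an off-the-shelf AU detector run on the synthesized image; the AU-to-index mapping, the thresholding, and the averaging are assumptions standing in for the paper's actual definition.

```python
import numpy as np

AU_INDEX = {1: 0, 4: 1, 12: 2, 15: 3, 26: 4}  # assumed AU-to-index mapping


def alignment_score(described_aus, detected_probs, threshold=0.5):
    """One plausible alignment score between the AUs named in a text description
    and AU probabilities predicted by a detector on the synthesized image.
    This is an assumption, not the paper's AAAD formulation."""
    target = np.zeros(len(AU_INDEX))
    for au in described_aus:
        target[AU_INDEX[au]] = 1.0
    predicted = (np.asarray(detected_probs) >= threshold).astype(float)
    return float((target == predicted).mean())  # fraction of AUs in agreement


print(alignment_score({1, 12}, [0.9, 0.1, 0.8, 0.2, 0.1]))  # 1.0
```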
