GeomHair: Reconstruction of Hair Strands from Colorless 3D Scans
arXiv:2505.05376v3 Announce Type: replace
Abstract: We propose a novel method that reconstructs hair strands directly from colorless 3D scans by leveraging multi-modal hair orientation extraction. Hair strand reconstruction is a fundamental problem in computer vision and graphics, essential for high-fidelity digital avatar synthesis, animation, and AR/VR applications. However, accurately recovering hair strands from raw scan data remains challenging due to the complex and fine-grained structure of human hair, and none of the existing methods operate on colorless 3D geometry alone. To address this gap, our method directly identifies sharp surface features on the scan and estimates strand orientation by applying a neural 2D line detector to shaded renderings of the scan. Additionally, we incorporate a diffusion prior trained on a diverse set of synthetic hair scans, refined with a noise schedule, and adapted to the reconstructed content via a scan-specific text prompt. We demonstrate that this combination of supervision signals enables accurate reconstruction of both simple and intricate hairstyles from geometry alone. By enabling strand extraction from 3D scans, we compile Strands400, the largest publicly available dataset of hair strands with detailed surface geometry extracted from real-world data, comprising reconstructions from 400 subjects' scans. Strands400 enables training data-driven generative models for downstream tasks such as image-to-strands and text-to-strands. Moreover, our method applies to designer mesh assets, supporting a practical CG workflow in which artists model hair as meshes and need strand-level representations for simulation and rendering. All code and data will be released for research purposes at https://seva100.github.io/GeomHair/.
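The abstract mentions estimating strand orientation from shaded renderings of the scan. As an illustrative aside only: the paper uses a neural 2D line detector, but the underlying idea of extracting a per-pixel orientation field from a grayscale rendering can be sketched with a classical oriented-filter baseline. The sketch below uses a Gabor filter bank (a common pre-neural approach to hair orientation maps); all parameter values and the function name are illustrative assumptions, not the authors' implementation.

```python
import numpy as np
from scipy.ndimage import convolve

def gabor_orientation_map(img, n_angles=16, ksize=15, sigma=2.0, lambd=6.0):
    """Estimate a per-pixel dominant orientation in a grayscale shading
    render using a bank of Gabor filters.

    NOTE: this is a classical stand-in for the paper's neural 2D line
    detector; filter parameters here are illustrative, not from the paper.
    Returns the angle (in radians, in [0, pi)) of the strongest-responding
    filter's carrier direction at each pixel.
    """
    half = ksize // 2
    ys, xs = np.mgrid[-half:half + 1, -half:half + 1]
    angles = np.linspace(0.0, np.pi, n_angles, endpoint=False)
    responses = []
    for theta in angles:
        # Rotate coordinates so the cosine carrier varies along direction theta.
        xr = xs * np.cos(theta) + ys * np.sin(theta)
        yr = -xs * np.sin(theta) + ys * np.cos(theta)
        kern = np.exp(-(xr**2 + yr**2) / (2.0 * sigma**2)) \
               * np.cos(2.0 * np.pi * xr / lambd)
        kern -= kern.mean()  # zero-mean so flat regions give zero response
        responses.append(np.abs(convolve(img, kern, mode='nearest')))
    responses = np.stack(responses)      # (n_angles, H, W)
    best = responses.argmax(axis=0)      # strongest filter per pixel
    return angles[best]
```

On an image of vertical stripes (intensity varying along x), the map is dominated by angles near 0, i.e. the filter whose carrier aligns with the intensity variation; a strand line's direction is this angle plus pi/2. The paper replaces such hand-tuned filters with a learned detector, which is more robust to shading variation on real scans.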