cs.MM, cs.RO

DA-PTQ: Drift-Aware Post-Training Quantization for Efficient Vision-Language-Action Models

arXiv:2604.11572v1 Announce Type: new
Abstract: Vision-Language-Action models (VLAs) have demonstrated strong potential for embodied AI, yet their deployment on resource-limited robots remains challenging due to high memory and computational demands. …
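The drift-aware mechanism of DA-PTQ is not detailed in the truncated abstract, but the memory savings it targets come from standard post-training quantization: replacing 32-bit floating-point weights with low-bit integers plus a scale factor, with no retraining. A minimal sketch of symmetric per-tensor int8 quantization (a generic baseline, not the paper's method; all names here are illustrative):

```python
import numpy as np

def quantize_int8(w: np.ndarray):
    """Symmetric per-tensor int8 quantization: w ~= scale * q."""
    scale = np.abs(w).max() / 127.0          # map the largest |weight| to 127
    q = np.clip(np.round(w / scale), -127, 127).astype(np.int8)
    return q, scale

def dequantize(q: np.ndarray, scale: float) -> np.ndarray:
    """Recover an approximate float32 weight matrix from int8 codes."""
    return q.astype(np.float32) * scale

rng = np.random.default_rng(0)
w = rng.normal(size=(4, 4)).astype(np.float32)   # stand-in for a VLA weight block
q, scale = quantize_int8(w)
w_hat = dequantize(q, scale)

# int8 storage is 4x smaller than float32; rounding error is bounded by scale/2
print(q.dtype, float(np.max(np.abs(w - w_hat))))
```

Storing `q` (1 byte/weight) instead of `w` (4 bytes/weight) gives the 4x memory reduction that motivates deploying quantized VLAs on resource-limited robots; the paper's drift-aware contribution presumably refines how the quantization error is controlled.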