Prototype-Grounded Concept Models for Verifiable Concept Alignment
arXiv:2604.16076v1 Announce Type: cross
Abstract: Concept Bottleneck Models (CBMs) aim to improve interpretability in Deep Learning by structuring predictions through human-understandable concepts, but they provide no way to verify whether learned con…
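The abstract's description of CBMs, predictions routed through human-understandable concepts, can be sketched as a minimal two-stage forward pass. The concept names, dimensions, and plain linear layers below are illustrative assumptions, not the paper's architecture:

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical setup: 8 input features, 3 named concepts, 2 output classes.
CONCEPTS = ["has_wings", "has_beak", "has_fur"]
W_concept = rng.normal(size=(len(CONCEPTS), 8))  # input -> concept logits
W_label = rng.normal(size=(2, len(CONCEPTS)))    # concepts -> class logits

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

def predict(x):
    """Two-stage CBM forward pass: concepts first, then the label."""
    concept_probs = sigmoid(W_concept @ x)  # human-inspectable bottleneck
    label_logits = W_label @ concept_probs  # label depends only on concepts
    return concept_probs, label_logits

x = rng.normal(size=8)
concept_probs, label_logits = predict(x)
# The bottleneck exposes each concept's predicted probability by name:
for name, p in zip(CONCEPTS, concept_probs):
    print(f"{name}: {p:.2f}")
```

Because the label head sees only the concept activations, inspecting (or editing) `concept_probs` is the interpretability lever the abstract refers to; whether those activations actually align with the named concepts is the verification gap the paper addresses.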