Memory-Efficient Continual Learning with CLIP Models
arXiv:2605.03866v1 Announce Type: new
Abstract: Contrastive Language-Image Pretraining (CLIP) models excel at understanding image-text relationships but struggle with adapting to new data without forgetting prior knowledge. To address this, models are…
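The abstract refers to CLIP's contrastive image-text training in general terms. As background only (not the paper's method), a minimal NumPy sketch of the symmetric contrastive (InfoNCE) objective CLIP-style models optimize, with random vectors standing in for real image/text encoder outputs; the batch size, embedding dimension, and temperature here are illustrative assumptions:

```python
import numpy as np

rng = np.random.default_rng(0)
batch, dim = 4, 512
image_emb = rng.normal(size=(batch, dim))  # stand-in for image encoder outputs
text_emb = rng.normal(size=(batch, dim))   # stand-in for text encoder outputs

# L2-normalize so dot products become cosine similarities
image_emb /= np.linalg.norm(image_emb, axis=1, keepdims=True)
text_emb /= np.linalg.norm(text_emb, axis=1, keepdims=True)

temperature = 0.07  # illustrative value; CLIP learns this as a parameter
logits = image_emb @ text_emb.T / temperature  # (batch, batch) similarity matrix

def cross_entropy(logits, labels):
    """Mean softmax cross-entropy, computed in a numerically stable way."""
    shifted = logits - logits.max(axis=1, keepdims=True)
    log_probs = shifted - np.log(np.exp(shifted).sum(axis=1, keepdims=True))
    return -log_probs[np.arange(len(labels)), labels].mean()

# Matched image-text pairs sit on the diagonal; the loss is symmetric
# over the image->text and text->image directions.
labels = np.arange(batch)
loss = (cross_entropy(logits, labels) + cross_entropy(logits.T, labels)) / 2
```

In continual-learning settings, the difficulty the abstract points at is that fine-tuning this objective on new image-text data shifts the encoders and degrades the similarity structure learned during pretraining.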