G-Loss: Graph-Guided Fine-Tuning of Language Models
arXiv:2604.25853v1 Announce Type: new
Abstract: Traditional loss functions, including cross-entropy, contrastive, triplet, and supervised contrastive losses, used for fine-tuning pre-trained language models such as BERT, operate only within local nei…
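For context on the "local neighborhood" objectives the abstract contrasts against, here is a minimal NumPy sketch of one of them, the supervised contrastive loss (each anchor is pulled toward same-label examples in the batch and pushed from all others). The function name, toy embeddings, and temperature value are illustrative assumptions, not part of the paper.

```python
import numpy as np

def sup_con_loss(embeddings, labels, temperature=0.1):
    """Supervised contrastive loss over one batch (illustrative sketch).

    embeddings: (n, d) array; labels: length-n sequence of class ids.
    """
    # L2-normalize so the dot product is cosine similarity.
    z = embeddings / np.linalg.norm(embeddings, axis=1, keepdims=True)
    sim = z @ z.T / temperature

    n = len(labels)
    total, count = 0.0, 0
    for i in range(n):
        # Positives: other batch items sharing the anchor's label.
        positives = [j for j in range(n) if j != i and labels[j] == labels[i]]
        if not positives:
            continue
        # Denominator sums over all non-anchor items (positives and negatives).
        denom = sum(np.exp(sim[i, k]) for k in range(n) if k != i)
        for p in positives:
            total += -np.log(np.exp(sim[i, p]) / denom)
            count += 1
    return total / count
```

Because the objective only compares items inside the batch, its gradient signal is inherently local, which is the limitation the paper's graph-guided loss is positioned against.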