G-Loss: Graph-Guided Fine-Tuning of Language Models

📰 ArXiv cs.AI

arXiv:2604.25853v1 Announce Type: cross Abstract: Traditional loss functions used for fine-tuning pre-trained language models such as BERT, including cross-entropy, contrastive, triplet, and supervised contrastive losses, operate only within local neighborhoods and fail to account for the global semantic structure of the data. We present G-Loss, a graph-guided loss function that incorporates semi-supervised label propagation to exploit structural relationships within the embedding manifold. G-Loss builds a do…
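The abstract does not specify G-Loss's exact formulation, but the label-propagation component it references is a standard technique: build a similarity graph over the embeddings, then iteratively diffuse the known labels across the graph (as in Zhou et al.'s label spreading, `Y ← αSY + (1−α)Y₀`). A minimal numpy sketch of that generic step, not the paper's actual method; the function name, cosine affinity, and hyperparameters are illustrative assumptions:

```python
import numpy as np

def label_propagation(embeddings, labels, alpha=0.9, n_iter=50):
    """Diffuse seed labels over a similarity graph built from embeddings.

    embeddings: (n, d) array of sentence/token embeddings.
    labels: (n,) int array; -1 marks unlabeled points.
    Returns (n, n_classes) soft label distributions.
    NOTE: illustrative sketch of generic label spreading, not G-Loss itself.
    """
    n = len(embeddings)
    n_classes = labels.max() + 1

    # Cosine-similarity affinity matrix, negatives clipped, no self-loops.
    X = embeddings / np.linalg.norm(embeddings, axis=1, keepdims=True)
    W = np.clip(X @ X.T, 0.0, None)
    np.fill_diagonal(W, 0.0)

    # Symmetric normalization S = D^{-1/2} W D^{-1/2}.
    d = W.sum(axis=1)
    d_inv_sqrt = 1.0 / np.sqrt(np.maximum(d, 1e-12))
    S = W * d_inv_sqrt[:, None] * d_inv_sqrt[None, :]

    # One-hot seed matrix; unlabeled rows start at zero.
    Y0 = np.zeros((n, n_classes))
    mask = labels >= 0
    Y0[np.arange(n)[mask], labels[mask]] = 1.0

    # Iterate Y <- alpha * S @ Y + (1 - alpha) * Y0.
    Y = Y0.copy()
    for _ in range(n_iter):
        Y = alpha * (S @ Y) + (1 - alpha) * Y0

    # Row-normalize into soft label distributions.
    return Y / np.maximum(Y.sum(axis=1, keepdims=True), 1e-12)
```

The resulting soft labels for unlabeled points reflect the graph's global structure, which is the kind of signal a graph-guided loss can exploit beyond a point's local neighborhood.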

Published 29 Apr 2026