Tuning Language Models for Robust Prediction of Diverse User Behaviors
📰 ArXiv cs.AI
arXiv:2505.17682v2 Announce Type: replace-cross Abstract: Predicting user behavior is essential for intelligent assistant services, yet deep learning models often struggle to capture long-tailed behaviors. Large language models (LLMs), pretrained on vast corpora containing rich behavioral knowledge, offer promise. However, existing fine-tuning approaches tend to overfit to frequent "anchor" behaviors, reducing their ability to predict less common "tail" behaviors. In this paper,