Proximal Supervised Fine-Tuning

📰 ArXiv cs.AI

arXiv:2508.17784v2 Announce Type: replace-cross Abstract: Supervised fine-tuning (SFT) of foundation models often leads to poor generalization, where prior capabilities deteriorate after tuning on new tasks or domains. Inspired by trust-region policy optimization (TRPO) and proximal policy optimization (PPO) in reinforcement learning (RL), we propose Proximal SFT (PSFT). This fine-tuning objective incorporates the benefits of a trust region, effectively constraining policy drift during SFT while m…
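The abstract suggests carrying PPO's clipped trust-region surrogate over to SFT. As a rough sketch of that idea (not the paper's exact objective): treat the ground-truth next token as an action with unit advantage and clip the per-token probability ratio against a frozen reference policy, so large drifts from the pre-tuning model stop contributing gradient. The function name and the unit-advantage assumption here are illustrative.

```python
import numpy as np

def clipped_sft_loss(logp_new, logp_old, eps=0.2):
    """Hypothetical PPO-style clipped SFT objective (a sketch, not the
    paper's exact formulation). logp_new/logp_old are per-token log-probs
    of the ground-truth tokens under the current and reference policies."""
    ratio = np.exp(logp_new - logp_old)            # token-level probability ratio
    clipped = np.clip(ratio, 1.0 - eps, 1.0 + eps)  # trust-region clamp
    # With unit "advantage" on ground-truth tokens, take the pessimistic
    # (minimum) term, as in PPO's clipped surrogate.
    return -np.mean(np.minimum(ratio, clipped))

# When the policy has not drifted (ratio = 1), the loss is just -1 per token;
# tokens whose ratio exceeds 1 + eps are clipped and stop rewarding further drift.
```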

Published 14 Apr 2026