How to Teach Large Multimodal Models New Skills
📰 ArXiv cs.AI
arXiv:2510.08564v2 (announce type: replace)

Abstract: How can we teach large multimodal models (LMMs) new skills without erasing prior abilities? We study sequential fine-tuning on five target skills while monitoring general ability on eight held-out benchmarks across three model families. Surprisingly, we find that performance lost on held-out tasks after fine-tuning on one skill can partly recover when the model is subsequently tuned on a different skill. We trace this behavior to a measurable s
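The evaluation protocol described in the abstract can be sketched as a loop: fine-tune on one target skill at a time, and after each stage score a fixed set of held-out benchmarks to track forgetting and any later recovery. Below is a minimal, hypothetical sketch of that loop; the model, skill names, benchmark names, and the toy drift dynamics are all illustrative stand-ins, not the paper's actual models or data.

```python
def fine_tune(model, skill):
    """Toy stand-in: tuning on a skill boosts it and slightly drifts
    the model's other abilities (a crude proxy for forgetting)."""
    model = dict(model)
    model[skill] = model.get(skill, 0.0) + 1.0
    for k in model:
        if k != skill:
            model[k] *= 0.9  # unrelated abilities degrade a little
    return model

def evaluate(model, benchmarks):
    """Score each held-out benchmark; higher is better."""
    return {b: round(model.get(b, 0.0), 3) for b in benchmarks}

# Stand-ins for the paper's 5 target skills and 8 held-out benchmarks.
target_skills = ["skill_A", "skill_B"]
held_out = ["bench_1", "bench_2"]

model = {b: 1.0 for b in held_out}  # pretrained general abilities
history = []
for skill in target_skills:
    model = fine_tune(model, skill)
    history.append((skill, evaluate(model, held_out)))

for skill, scores in history:
    print(skill, scores)
```

Tracking `history` across stages is what lets one observe the pattern the abstract reports: held-out scores dropping after one tuning stage and partially moving again after the next.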