BoostLoRA: Growing Effective Rank by Boosting Adapters

📰 arXiv cs.AI

arXiv:2604.27308v1 · Announce Type: cross

Abstract: Parameter-efficient fine-tuning (PEFT) methods face a tradeoff between adapter size and expressivity: ultra-low-parameter adapters are confined to fixed low-rank subspaces, which caps performance even with extended training. We propose BoostLoRA, a gradient-boosting framework that overcomes this limit by iteratively training and merging minimal adapters on the examples the current model gets wrong. A ROTATE SVD basis strategy assigns each round to a […]
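The boosting loop the abstract describes is simple to picture: fit a tiny low-rank adapter on the currently high-error examples, merge it into the weights, and repeat, so the effective rank of the cumulative update grows round by round. Below is a minimal NumPy sketch of that idea on a toy linear model, not the paper's implementation; the rank-1 adapters, the median-error split used to pick "hard" examples, and all variable names are illustrative assumptions.

```python
import numpy as np

rng = np.random.default_rng(0)
d, n, rounds, r = 16, 200, 6, 1   # dims, samples, boosting rounds, adapter rank

W_true = rng.normal(size=(d, d))  # target weights the model should reach
W = np.zeros((d, d))              # cumulative merged weights, start at zero
X = rng.normal(size=(n, d))       # inputs
Y = X @ W_true.T                  # regression targets

for t in range(rounds):
    # Pick the examples the current model gets "wrong" (above-median error).
    err = np.linalg.norm(X @ W.T - Y, axis=1)
    hard = err > np.median(err)
    Xh, Yh = X[hard], Y[hard]

    # Least-squares residual map on the hard subset, truncated to rank r:
    R, *_ = np.linalg.lstsq(Xh, Yh - Xh @ W.T, rcond=None)
    U, s, Vt = np.linalg.svd(R.T, full_matrices=False)
    delta = U[:, :r] * s[:r] @ Vt[:r]  # this round's rank-r adapter

    W = W + delta                      # merge the adapter into the weights
    eff_rank = int((np.linalg.svd(W, compute_uv=False) > 1e-8).sum())
    print(f"round {t}: effective rank of cumulative update = {eff_rank}")
```

In this toy setting the printed effective rank climbs by roughly one per round, since each merged rank-1 adapter targets the residual left by the previous rounds; this mirrors how boosting fixed-rank adapters can escape a single low-rank subspace.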

Published 1 May 2026