ShadowPEFT: Shadow Network for Parameter-Efficient Fine-Tuning

📰 ArXiv cs.AI

arXiv:2604.19254v1 (announce type: cross)

Abstract: Parameter-efficient fine-tuning (PEFT) reduces the training cost of full-parameter fine-tuning for large language models (LLMs) by training only a small set of task-specific parameters while freezing the pretrained backbone. However, existing approaches, such as Low-Rank Adaptation (LoRA), achieve adaptation by inserting independent low-rank perturbations directly into individual weights, resulting in a local parameterization of adaptation. We prop…
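To make the "local parameterization" point concrete, here is a minimal sketch of the LoRA baseline the abstract contrasts against, not of the ShadowPEFT method itself. All names, sizes, and the scaling convention `alpha / r` are illustrative assumptions; LoRA adapts each weight independently by adding a trainable low-rank perturbation to a frozen matrix:

```python
import numpy as np

rng = np.random.default_rng(0)
d_out, d_in, r, alpha = 8, 8, 2, 4  # hypothetical sizes; rank r << d

W = rng.standard_normal((d_out, d_in))      # frozen pretrained weight
A = rng.standard_normal((r, d_in)) * 0.01   # trainable down-projection
B = np.zeros((d_out, r))                    # trainable up-projection, zero-initialized

def lora_forward(x):
    # The effective weight is the frozen W plus a scaled low-rank
    # perturbation B @ A, parameterized locally per weight matrix.
    delta = (alpha / r) * (B @ A)
    return x @ (W + delta).T

x = rng.standard_normal((1, d_in))
# With B initialized to zero, the adapter is inactive and the layer
# reproduces the frozen pretrained computation exactly.
assert np.allclose(lora_forward(x), x @ W.T)
```

Because each weight matrix gets its own independent `(B, A)` pair, the adaptation is local to that weight, which is the parameterization the paper's "shadow network" framing appears to move away from.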

Published 22 Apr 2026