From Local to Global: Revisiting Structured Pruning Paradigms for Large Language Models
📰 ArXiv cs.AI
arXiv:2510.18030v2 Announce Type: replace-cross Abstract: Structured pruning is a practical approach to deploying large language models (LLMs) efficiently, as it yields compact, hardware-friendly architectures. However, the dominant local paradigm is task-agnostic: by optimizing layer-wise reconstruction rather than task objectives, it tends to preserve perplexity or generic zero-shot behavior but fails to capitalize on modest task-specific calibration signals, often yielding limited downstream gains.
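To make the "local paradigm" concrete, below is a minimal sketch of layer-wise reconstruction pruning for a single linear layer: input channels are ranked by an activation-weighted importance heuristic, the lowest-scoring ones are removed, and the surviving weights are refit by least squares against the original layer's outputs on calibration activations. This is an illustrative assumption of the generic local approach, not the specific method proposed in the paper; the function name, scoring rule, and shapes are hypothetical.

```python
import numpy as np

def prune_layer_local(W, X, keep_ratio=0.5):
    """Structured pruning of a linear layer y = X @ W by input-channel removal,
    followed by layer-wise reconstruction on calibration activations.

    W: (d_in, d_out) weight matrix.
    X: (n, d_in) calibration activations feeding this layer.
    Returns the kept channel indices and the reconstructed weights.
    """
    d_in = W.shape[0]
    n_keep = int(d_in * keep_ratio)
    # Importance heuristic (an assumption): activation norm times weight-row norm.
    scores = np.linalg.norm(X, axis=0) * np.linalg.norm(W, axis=1)
    keep = np.sort(np.argsort(scores)[-n_keep:])
    X_kept = X[:, keep]
    # Local objective: min_W' || X_kept @ W' - X @ W ||_F, solved in closed form.
    # Note this targets reconstruction of the layer's output, not any task loss.
    W_new, *_ = np.linalg.lstsq(X_kept, X @ W, rcond=None)
    return keep, W_new

# Usage on synthetic data: reconstruction should beat naive weight slicing.
rng = np.random.default_rng(0)
X = rng.normal(size=(64, 16))
W = rng.normal(size=(16, 8))
keep, W_new = prune_layer_local(W, X, keep_ratio=0.5)
err_recon = np.linalg.norm(X[:, keep] @ W_new - X @ W)
err_naive = np.linalg.norm(X[:, keep] @ W[keep] - X @ W)
print(err_recon <= err_naive)
```

The closed-form refit is exactly why this paradigm is task-agnostic, as the abstract notes: the objective is faithfulness to the original layer's activations, with no term that sees downstream task labels.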