The Surprising Effectiveness of Canonical Knowledge Distillation for Semantic Segmentation

📰 ArXiv cs.AI

arXiv:2604.25530v2 Announce Type: cross

Abstract: Recent knowledge distillation (KD) methods for semantic segmentation introduce increasingly complex hand-crafted objectives, yet they are typically evaluated under fixed iteration schedules. These objectives substantially increase per-iteration cost, so equal iteration counts do not correspond to equal training budgets. It is therefore unclear whether reported gains reflect stronger distillation signals or simply greater compute. We show that ite…
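For reference, here is a minimal sketch of what "canonical" KD typically means for segmentation, assuming the standard Hinton-style objective (student cross-entropy plus a temperature-scaled KL term on per-pixel logits), together with a simple wall-clock budget-matching helper in the spirit of the equal-compute comparison the abstract argues for. The function names and the hyperparameters `T` and `alpha` are illustrative assumptions, not taken from the paper.

```python
# A minimal sketch of canonical KD for semantic segmentation, assuming the
# standard Hinton-style objective; hyperparameters are illustrative, not the
# paper's. Logits are (N, C, H, W); labels are (N, H, W).
import torch
import torch.nn.functional as F

def canonical_kd_loss(student_logits, teacher_logits, labels,
                      T=4.0, alpha=0.5, ignore_index=255):
    # Hard-label cross-entropy on the student's own predictions.
    ce = F.cross_entropy(student_logits, labels, ignore_index=ignore_index)
    # Soft-label term: per-pixel KL between temperature-scaled distributions.
    log_p_s = F.log_softmax(student_logits / T, dim=1)
    p_t = F.softmax(teacher_logits / T, dim=1)
    # Sum KL over the class dim, average over every pixel in the batch;
    # the T**2 factor keeps soft-label gradients on the same scale as CE.
    kd = F.kl_div(log_p_s, p_t, reduction="none").sum(dim=1).mean() * T * T
    return (1.0 - alpha) * ce + alpha * kd

def budget_matched_iters(ref_iters, ref_step_seconds, own_step_seconds):
    # Iterations that spend the same wall-clock budget as a reference run,
    # so a method with cheaper steps is allowed proportionally more of them.
    return int(ref_iters * ref_step_seconds / own_step_seconds)

if __name__ == "__main__":
    s = torch.randn(2, 19, 64, 64, requires_grad=True)  # student logits
    t = torch.randn(2, 19, 64, 64)                      # teacher logits
    y = torch.randint(0, 19, (2, 64, 64))                # ground-truth labels
    print(canonical_kd_loss(s, t, y).item())
    # If a complex objective's step costs 1.5x the canonical step, a matched
    # budget grants canonical KD 1.5x the iterations.
    print(budget_matched_iters(ref_iters=40_000, ref_step_seconds=0.6,
                               own_step_seconds=0.4))  # -> 60000
```

Under this framing, a gain reported by a costlier objective only counts if it survives granting the cheaper canonical baseline its budget-matched extra iterations.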

Published 29 Apr 2026
Read full paper →