SCURank: Ranking Multiple Candidate Summaries with Summary Content Units for Enhanced Summarization

📰 ArXiv cs.AI

arXiv:2604.19185v1 (Announce Type: cross)

Abstract: Small language models (SLMs), such as BART, can achieve summarization performance comparable to large language models (LLMs) via distillation. However, existing LLM-based ranking strategies for summary candidates suffer from instability, while classical metrics (e.g., ROUGE) are insufficient for ranking high-quality summaries. To address these issues, we introduce **SCURank**, a framework that enhances summarization by leveraging Summary …
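The abstract is cut off, but the core idea — ranking candidate summaries by how well they cover Summary Content Units (SCUs) — can be sketched in miniature. This is a toy illustration, not the paper's actual algorithm: the `scu_coverage` scorer and its naive substring matching are assumptions made here for demonstration only.

```python
# Toy sketch of SCU-based candidate ranking (hypothetical scorer,
# not SCURank's method). "Coverage" is approximated by checking
# whether each SCU's text appears verbatim in the candidate summary.

def scu_coverage(candidate: str, scus: list[str]) -> float:
    """Fraction of SCUs found (case-insensitively) in the candidate."""
    if not scus:
        return 0.0
    hits = sum(1 for scu in scus if scu.lower() in candidate.lower())
    return hits / len(scus)

def rank_candidates(candidates: list[str], scus: list[str]) -> list[str]:
    """Sort candidate summaries by descending SCU coverage."""
    return sorted(candidates, key=lambda c: scu_coverage(c, scus), reverse=True)

# Example: the candidate covering both SCUs ranks first.
scus = ["the bridge closed", "repairs begin monday"]
candidates = [
    "Officials said repairs begin Monday.",
    "The bridge closed; repairs begin Monday.",
]
best = rank_candidates(candidates, scus)[0]
```

In practice, an SCU-based ranker would replace the substring check with a learned matcher (e.g., an entailment model), but the ranking skeleton stays the same.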

Published 22 Apr 2026