RbtAct: Rebuttal as Supervision for Actionable Review Feedback Generation

📰 ArXiv cs.AI

arXiv:2603.09723v2 Announce Type: replace-cross Abstract: Large language models (LLMs) are increasingly used across the scientific workflow, including to draft peer-review reports. However, many AI-generated reviews are superficial and insufficiently actionable, leaving authors without concrete, implementable guidance; this is the gap this work addresses. We propose RbtAct, which targets actionable review feedback generation and places existing peer-review rebuttals at the center of learning.

Published 29 Apr 2026