Beyond Output Correctness: Benchmarking and Evaluating Large Language Model Reasoning in Coding Tasks

📰 ArXiv cs.AI

arXiv:2604.12379v1 | Announce Type: cross

Abstract: Large language models (LLMs) increasingly rely on explicit reasoning to solve coding tasks, yet evaluating the quality of this reasoning remains challenging. Existing reasoning evaluators are not designed for coding, and current benchmarks focus primarily on code generation, leaving other coding tasks largely unexplored. We introduce CodeRQ-Bench, the first benchmark for evaluating LLM reasoning quality across three coding task categories: genera…

Published 15 Apr 2026