CoDe-R: Refining Decompiler Output with LLMs via Rationale Guidance and Adaptive Inference

📰 ArXiv cs.AI

arXiv:2604.12913v1 (cross-listed)

Abstract: Binary decompilation is a critical reverse engineering task aimed at reconstructing high-level source code from stripped executables. Although Large Language Models (LLMs) have recently shown promise, they often suffer from "logical hallucinations" and "semantic misalignment" due to the irreversible semantic loss during compilation, resulting in generated code that fails to re-execute. In this study, we propose Cognitive Decompiler Refinement wit

Published 15 Apr 2026