Beyond the Training Distribution: Mapping Generalization Boundaries in Neural Program Synthesis

📰 arXiv cs.AI

arXiv:2604.27551v1 (Announce Type: cross)

Abstract: Large-scale transformers achieve impressive results on program synthesis benchmarks, yet their true generalization capabilities remain obscured by data contamination and opaque training corpora. To rigorously assess whether models are truly generalizing or merely retrieving memorized templates, we introduce a strictly controlled program synthesis environment based on a domain-specific arithmetic grammar. By systematically enumerating and evaluating …
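The abstract does not spell out the grammar, but a controlled environment of this kind can be sketched in a few lines: define a tiny arithmetic DSL and exhaustively enumerate every program up to a nesting depth, so the full training and test distributions are known by construction. The single-variable grammar, the constant pool, and the depth-bounded enumeration below are illustrative assumptions, not the paper's actual setup.

```python
import itertools

# Hypothetical minimal arithmetic DSL: expressions over one input
# variable x, built from small integer constants and binary operators.
CONSTANTS = [1, 2, 3]
OPERATORS = {
    "+": lambda a, b: a + b,
    "-": lambda a, b: a - b,
    "*": lambda a, b: a * b,
}

def enumerate_programs(depth):
    """Yield (expression string, evaluator) pairs for every program
    in the grammar up to the given nesting depth."""
    if depth == 0:
        yield "x", lambda x: x
        for c in CONSTANTS:
            yield str(c), (lambda c: lambda x: c)(c)
        return
    # Shallower programs are reused both on their own and as
    # sub-expressions of each binary operator.
    subprograms = list(enumerate_programs(depth - 1))
    yield from subprograms
    for (ls, lf), (rs, rf) in itertools.product(subprograms, repeat=2):
        for op, fn in OPERATORS.items():
            yield (f"({ls} {op} {rs})",
                   (lambda fn, lf, rf: lambda x: fn(lf(x), rf(x)))(fn, lf, rf))

# Because the space is fully enumerable, semantic coverage is measurable:
# count distinct input-output behaviours at depth 2 on a probe set.
inputs = range(-2, 3)
behaviours = {tuple(f(x) for x in inputs) for _, f in enumerate_programs(2)}
print(len(behaviours), "distinct functions on", list(inputs))
```

With the whole program space in hand, one can hold out chosen regions (say, all programs containing a particular operator pairing) and test whether a model reconstructs them or only retrieves seen templates, which is the kind of contamination-free evaluation the abstract describes.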

Published 1 May 2026
Read full paper →