Faithful Autoformalization via Roundtrip Verification and Repair
📰 ArXiv cs.AI
arXiv:2604.25031v1 Announce Type: cross Abstract: When an LLM formalizes natural language, how do we know the output is faithful? We propose a roundtrip verification approach that does not require ground-truth annotations: formalize a statement, translate the result back into natural language, re-formalize it, and use a formal tool to check logical equivalence. When the two formalizations agree, this provides evidence of a faithful formalization. When they disagree, a diagnosis step identifies which …
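The roundtrip loop described in the abstract can be sketched in miniature. This is an illustrative stand-in, not the paper's system: `toy_formalize` and `toy_informalize` are lookup-table stubs in place of LLM calls, and a brute-force propositional truth-table check substitutes for a real formal equivalence tool.

```python
from itertools import product

def equivalent(f, g, varnames):
    # Brute-force equivalence check: compare f and g on all 2^n
    # truth assignments (fine for small propositional examples).
    for values in product([False, True], repeat=len(varnames)):
        env = dict(zip(varnames, values))
        if f(env) != g(env):
            return False
    return True

def roundtrip_check(statement, formalize, informalize, varnames):
    # Step 1: formalize the natural-language statement.
    f1 = formalize(statement)
    # Step 2: translate the formalization back into natural language.
    back = informalize(f1)
    # Step 3: re-formalize the back-translation.
    f2 = formalize(back)
    # Step 4: check logical equivalence of the two formalizations.
    return equivalent(f1, f2, varnames), f1, f2

# Hypothetical stand-ins for the LLM calls: a lookup table, not a model.
IMPLIES = lambda env: (not env["p"]) or env["q"]
TABLE = {"if p then q": IMPLIES, "not p, or q": IMPLIES}

def toy_formalize(text):
    return TABLE[text]

def toy_informalize(formula):
    return "not p, or q"  # a paraphrase of the same formula

ok, _, _ = roundtrip_check("if p then q", toy_formalize, toy_informalize, ["p", "q"])
print(ok)  # True: both formalizations are logically equivalent
```

Agreement here is evidence of faithfulness, as the abstract notes; on disagreement, the returned pair `(f1, f2)` is what a diagnosis step would inspect.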