LLMs as ASP Programmers: Self-Correction Enables Task-Agnostic Nonmonotonic Reasoning

arXiv cs.AI

arXiv:2604.27960v1

Abstract: Recent large language models (LLMs) have achieved impressive reasoning milestones but continue to struggle with high computational cost, logical inconsistency, and sharp performance degradation on high-complexity problems. While neuro-symbolic methods attempt to mitigate these issues by coupling LLMs with symbolic reasoners, existing approaches typically rely on monotonic logics (e.g., SMT) that cannot represent defeasible reasoning -- essential…
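To make the monotonic-vs-nonmonotonic distinction concrete, here is a minimal toy sketch (not the paper's system, and not an ASP solver) of a defeasible default in plain Python: the classic "birds fly unless they are penguins" rule, which in ASP would be written `flies(X) :- bird(X), not penguin(X).` All names below are illustrative.

```python
def consequences(facts):
    """Apply one defeasible rule: bird(X) and not penguin(X) => flies(X)."""
    birds = {x for (pred, x) in facts if pred == "bird"}
    penguins = {x for (pred, x) in facts if pred == "penguin"}
    # "not penguin(X)" is negation as failure: absence of proof counts as false.
    return {("flies", x) for x in birds - penguins}

kb = {("bird", "tweety")}
assert ("flies", "tweety") in consequences(kb)

# Learning a new fact RETRACTS a prior conclusion -- the nonmonotonic step
# that monotonic logics such as SMT cannot express directly.
kb.add(("penguin", "tweety"))
assert ("flies", "tweety") not in consequences(kb)
```

The second assertion is the point: adding knowledge shrinks the set of conclusions, which is exactly the behavior a monotonic logic forbids.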

Published 1 May 2026