Exploring Knowledge Conflicts for Faithful LLM Reasoning: Benchmark and Method
📰 ArXiv cs.AI
arXiv:2604.11209v1 Announce Type: cross Abstract: Large language models (LLMs) have achieved remarkable success across a wide range of applications, especially when augmented with external knowledge through retrieval-augmented generation (RAG). Despite their widespread adoption, recent studies have shown that LLMs often struggle to perform faithful reasoning when conflicting knowledge is retrieved. However, existing work primarily focuses on conflicts between external knowledge and the parametric knowledge
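To make the notion of a knowledge conflict concrete, here is a minimal sketch (hypothetical, not the paper's benchmark or method) of a RAG-style setup in which two retrieved passages imply different answers, the kind of conflicting evidence the abstract refers to. The helper names `build_rag_prompt` and `has_conflict` are illustrative assumptions.

```python
def build_rag_prompt(question: str, passages: list[str]) -> str:
    """Concatenate retrieved passages and the question into one RAG prompt."""
    context = "\n".join(f"[{i + 1}] {p}" for i, p in enumerate(passages))
    return f"Context:\n{context}\n\nQuestion: {question}\nAnswer:"

def has_conflict(answers: list[str]) -> bool:
    """Flag a knowledge conflict when the passages imply distinct answers."""
    return len({a.strip().lower() for a in answers}) > 1

question = "In what year was the Eiffel Tower completed?"
passages = [
    "The Eiffel Tower was completed in 1889.",  # agrees with common parametric knowledge
    "The Eiffel Tower was completed in 1902.",  # injected conflicting evidence
]
per_passage_answers = ["1889", "1902"]  # answer each passage supports

prompt = build_rag_prompt(question, passages)
print(has_conflict(per_passage_answers))  # True: retrieved evidence disagrees
```

A faithfulness probe along these lines feeds the model the conflicting prompt and checks whether its answer is grounded in (either of) the passages or silently defaults to its parametric belief.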