Reasoning Models Will Sometimes Lie About Their Reasoning
📰 ArXiv cs.AI
arXiv:2601.07663v3 Announce Type: replace Abstract: Hint-based faithfulness evaluations have established that Large Reasoning Models (LRMs) may not say what they think: they do not always volunteer information about how key parts of the input (e.g. answer hints) influence their reasoning. Yet, these evaluations also fail to specify what models should do when confronted with hints or other unusual prompt content -- even though versions of such instructions are standard security measures (e.g. for