Conflicts Make Large Reasoning Models Vulnerable to Attacks
📰 ArXiv cs.AI
arXiv:2604.09750v1 Announce Type: cross
Abstract: Large Reasoning Models (LRMs) have achieved remarkable performance across diverse domains, yet their decision-making under conflicting objectives remains insufficiently understood. This work investigates how LRMs respond to harmful queries when confronted with two categories of conflicts: internal conflicts, which pit alignment values against each other, and dilemmas, which impose mutually contradictory choices, including sacrificial, duress, agent-