Visual Self-Fulfilling Alignment: Shaping Safety-Oriented Personas via Threat-Related Images

📰 ArXiv cs.AI

arXiv:2603.08486v2 Announce Type: replace-cross Abstract: Multimodal large language models (MLLMs) suffer from safety misalignment, where visual inputs can elicit harmful outputs. Existing methods address this by requiring explicit safety labels or contrastive data; however, threat-related concepts are concrete and visually depictable, while safety concepts, such as helpfulness, are abstract and lack visual referents. Inspired by the self-fulfilling mechanism underlying emergent misalignment, we propose Visual Self-Fulfilling Alignment…

Published 16 Apr 2026