Benign Overfitting in Adversarial Training for Vision Transformers

📰 ArXiv cs.AI

arXiv:2604.19724v1 (cross-listed)

Abstract: Despite the remarkable success of Vision Transformers (ViTs) across a wide range of vision tasks, recent studies have revealed that they remain vulnerable to adversarial examples, much like Convolutional Neural Networks (CNNs). A common empirical defense strategy is adversarial training, yet the theoretical underpinnings of its robustness in ViTs remain largely unexplored. In this work, we present the first theoretical analysis of adversarial training…
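The abstract refers to adversarial training, which alternates an inner step that perturbs inputs to increase the loss with an outer step that updates the model on those perturbed inputs. The paper studies this for ViTs; as a purely illustrative sketch of the train-on-perturbed-inputs loop, here is an FGSM-style adversarial training step on a toy logistic-regression model (all names, data, and hyperparameters below are assumptions, not from the paper):

```python
import numpy as np

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

def fgsm_perturb(x, y, w, b, eps):
    """Shift each input one FGSM step in the direction that increases
    the logistic loss: x + eps * sign(dL/dx), with dL/dx = (p - y) * w."""
    p = sigmoid(x @ w + b)
    grad_x = (p - y)[:, None] * w[None, :]
    return x + eps * np.sign(grad_x)

def adversarial_train(x, y, eps=0.1, lr=0.5, steps=200, seed=0):
    """Adversarial training loop: inner maximization (perturb inputs),
    then outer minimization (gradient step on the perturbed batch)."""
    rng = np.random.default_rng(seed)
    w = rng.normal(scale=0.1, size=x.shape[1])
    b = 0.0
    for _ in range(steps):
        x_adv = fgsm_perturb(x, y, w, b, eps)      # inner: attack
        p = sigmoid(x_adv @ w + b)                 # outer: train on x_adv
        w -= lr * (x_adv.T @ (p - y)) / len(y)
        b -= lr * np.mean(p - y)
    return w, b

# Toy linearly separable data (two Gaussian blobs in 2D).
rng = np.random.default_rng(1)
x = np.vstack([rng.normal(loc=-2.0, size=(50, 2)),
               rng.normal(loc=+2.0, size=(50, 2))])
y = np.array([0] * 50 + [1] * 50, dtype=float)

w, b = adversarial_train(x, y)
# Robust accuracy: evaluate on freshly FGSM-perturbed inputs.
x_adv = fgsm_perturb(x, y, w, b, eps=0.1)
acc = np.mean((sigmoid(x_adv @ w + b) > 0.5) == (y == 1))
print(f"robust accuracy: {acc:.2f}")
```

In the full ViT setting the inner step is typically multi-step PGD on image patches rather than a single FGSM step on a linear model, but the alternating min-max structure is the same.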

Published 22 Apr 2026