Adversarial Evasion Attacks on Computer Vision using SHAP Values

📰 ArXiv cs.AI

arXiv:2601.10587v3 Announce Type: replace-cross Abstract: The paper introduces a white-box adversarial evasion attack on computer vision models that uses SHAP values. It demonstrates how such attacks compromise the performance of deep learning models by reducing output confidence or inducing misclassifications. These attacks are particularly insidious because they deceive the model's perception while remaining imperceptible to the human eye. The proposed attack le
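The abstract is truncated before the attack's details, so the following is only an illustrative sketch of the general idea (SHAP-guided evasion), not the paper's algorithm. It uses the known closed form for a linear model, where the SHAP value of feature i is phi_i = w_i * (x_i - baseline_i), and perturbs the k highest-attribution features against their contribution. The function names, the epsilon budget, and the choice of k are all assumptions for illustration.

```python
import numpy as np

def linear_shap(w, x, baseline):
    # Exact SHAP values for a linear model f(x) = w @ x + b:
    # phi_i = w_i * (x_i - baseline_i)
    return w * (x - baseline)

def shap_guided_evasion(w, x, baseline, eps, k):
    # Hypothetical SHAP-guided attack: perturb only the k features with the
    # largest absolute attribution, bounded by an L-infinity budget eps.
    phi = linear_shap(w, x, baseline)
    idx = np.argsort(-np.abs(phi))[:k]          # most influential features
    x_adv = x.copy()
    x_adv[idx] -= eps * np.sign(w[idx])         # push against the logit
    return x_adv

rng = np.random.default_rng(0)
w = rng.normal(size=8)                          # toy linear "model" weights
x = rng.normal(size=8)                          # input to attack
baseline = np.zeros(8)                          # SHAP reference input
x_adv = shap_guided_evasion(w, x, baseline, eps=0.5, k=3)
logit, logit_adv = w @ x, w @ x_adv             # confidence before / after
```

Concentrating the budget on high-attribution features is what keeps the perturbation sparse and hard to notice; here `logit_adv` is strictly lower than `logit` while the change to `x` stays within the eps bound.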

Published 13 Apr 2026