Out-of-the-box: Black-box Causal Attacks on Object Detectors
📰 ArXiv cs.AI
arXiv:2512.03730v2 Announce Type: replace-cross Abstract: Adversarial perturbations are a useful way to expose vulnerabilities in object detectors. Existing perturbation methods are frequently white-box, architecture-specific, and rely on a loss function. More importantly, while they are often successful, it is rarely clear why they work. Insight into the mechanism behind this success would allow developers to understand and analyze these attacks, as well as fine-tune the model to prevent them. This pap
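To make the black-box setting concrete: such attacks query a detector only through its outputs, with no access to gradients or architecture. Below is a minimal, hypothetical sketch of a query-only random-search attack; `detector_score` is a stand-in for a real detector's confidence (not the paper's method, and every name here is illustrative).

```python
import numpy as np

def detector_score(image):
    # Stand-in for a black-box detector's confidence on the true object.
    # (Hypothetical: a real attack would query an actual model's output.)
    target = np.full_like(image, 0.5)
    return 1.0 - np.abs(image - target).mean()

def random_search_attack(image, eps=0.1, steps=200, seed=0):
    """Query-only random search: keep a candidate perturbation only if it
    lowers the detector's confidence. No gradients or architecture
    knowledge are needed -- the defining property of a black-box attack."""
    rng = np.random.default_rng(seed)
    adv = image.copy()
    best = detector_score(adv)
    for _ in range(steps):
        delta = rng.uniform(-eps, eps, size=image.shape)
        candidate = np.clip(adv + delta, 0.0, 1.0)
        score = detector_score(candidate)
        if score < best:  # accept only confidence-decreasing moves
            adv, best = candidate, score
    return adv, best

image = np.full((8, 8), 0.5)  # toy "image" the stand-in scores highest
adv, score = random_search_attack(image)
print(score < detector_score(image))  # True: confidence was driven down
```

Real black-box attacks use far more query-efficient search strategies, but the accept-if-score-drops loop captures the basic interaction model the abstract describes.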