GF-Score: Certified Class-Conditional Robustness Evaluation with Fairness Guarantees
ArXiv cs.AI
arXiv:2604.12757v1 Announce Type: cross Abstract: Adversarial robustness is essential for deploying neural networks in safety-critical applications, yet standard evaluation methods either require expensive adversarial attacks or report only a single aggregate score that obscures how robustness is distributed across classes. We introduce the \emph{GF-Score} (GREAT-Fairness Score), a framework that decomposes the certified GREAT Score into per-class robustness profiles and quantifies their disparity.
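The core idea sketched in the abstract, decomposing an aggregate certified score into per-class profiles and measuring their spread, can be illustrated as follows. This is a minimal sketch under assumed definitions: the per-class aggregation (mean) and the disparity measure (max-min gap) are illustrative choices, not the paper's exact formulation, and `gf_score_profile` is a hypothetical name.

```python
import numpy as np

def gf_score_profile(scores, labels, num_classes):
    """Per-class mean certified scores and their disparity.

    Illustrative sketch: decompose sample-level certified robustness
    scores into a per-class profile, then quantify how unevenly
    robustness is distributed via the max-min gap (an assumption;
    the paper may define disparity differently).
    """
    scores = np.asarray(scores, dtype=float)
    labels = np.asarray(labels)
    # Mean certified score within each class (the per-class profile).
    per_class = np.array(
        [scores[labels == c].mean() for c in range(num_classes)]
    )
    # Disparity as the gap between the most and least robust class.
    disparity = per_class.max() - per_class.min()
    return per_class, disparity

# Example: class 0 is more robust than class 1 on average.
profile, gap = gf_score_profile(
    scores=[0.9, 0.7, 0.2, 0.4], labels=[0, 0, 1, 1], num_classes=2
)
```

A variance- or Gini-based disparity measure would work equally well here; the max-min gap is simply the most interpretable starting point.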