A Discussion of 'Adversarial Examples Are Not Bugs, They Are Features': Adversarial Example Researchers Need to Expand What is Meant by 'Robustness'
📰 Distill.pub
Adversarial examples arise from non-robust but genuinely predictive features in the training data, not from bugs in individual models, and researchers should expand what they mean by robustness
Action Steps
- Recognize that adversarial examples stem from features of the data that models learn, not from bugs in the models themselves
- Understand the limitations of current robustness measures
- Expand the definition of robustness to include distributional shifts and other real-world scenarios
- Develop new methods to improve model robustness and evaluate their effectiveness
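To make the evaluation step above concrete, here is a minimal numpy sketch of the standard Fast Gradient Sign Method (FGSM), a one-step L-infinity attack often used to probe robustness. The logistic model, its weights, and the input values are all illustrative assumptions, not taken from the article.

```python
import numpy as np

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

# Hypothetical fixed logistic-regression "model"; weights and bias
# are illustrative values chosen for the demo.
w = np.array([2.0, -1.0, 0.5])
b = 0.0

def predict_proba(x):
    """Probability of class 1 under the toy logistic model."""
    return sigmoid(w @ x + b)

def fgsm(x, y, eps):
    """Fast Gradient Sign Method: one-step L-infinity attack.

    For binary cross-entropy with a logistic model, the gradient of
    the loss w.r.t. the input is (p - y) * w, so the attack shifts
    each coordinate by eps in the direction that increases the loss.
    """
    p = predict_proba(x)
    grad = (p - y) * w          # dL/dx for binary cross-entropy
    return x + eps * np.sign(grad)

x = np.array([0.5, -0.5, 1.0])   # clean input, true label y = 1
y = 1.0
x_adv = fgsm(x, y, eps=0.6)

# The small, bounded perturbation flips the predicted class.
print("clean p(class=1):", round(float(predict_proba(x)), 3))
print("adv   p(class=1):", round(float(predict_proba(x_adv)), 3))
```

A robustness evaluation in this spirit reports accuracy on such perturbed inputs at a given eps, rather than only on clean data; the discussion argues this single-threat-model view should itself be broadened.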
Who Needs to Know This
Machine learning researchers and engineers can use this perspective to build and evaluate more robust models, and data scientists can apply it to ship more reliable systems
Key Insight
💡 Adversarial examples are a natural consequence of the predictive but non-robust features models learn from data, and should be treated as a property of that data, not a bug in the model
Share This
💡 Adversarial examples are not bugs, they're features! Rethink robustness in ML models
DeepCamp AI