A Discussion of 'Adversarial Examples Are Not Bugs, They Are Features': Adversarial Example Researchers Need to Expand What is Meant by 'Robustness'

📰 Distill.pub

Adversarial examples are a natural consequence of how machine learning models generalize, not bugs to be patched away, and researchers should broaden what they mean by robustness

Level: Advanced · Published 6 Aug 2019
Action Steps
  1. Recognize that adversarial examples are a feature of machine learning models, not a bug
  2. Understand the limitations of current robustness measures, which focus on small, worst-case perturbations
  3. Expand the definition of robustness beyond such perturbations to include distributional shifts and other real-world scenarios
  4. Develop new methods to improve model robustness and evaluate their effectiveness
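The distinction behind these steps can be made concrete with a toy model. The sketch below (an illustrative assumption, not code from the paper) uses a linear classifier to contrast an adversarial perturbation, which moves an input directly against the decision boundary, with a random perturbation of the same size, which a distributional-shift evaluation might produce; all weights, inputs, and budgets are invented for illustration.

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy linear classifier: predict class 1 iff w . x > 0.
# (Weights are random; this is a sketch, not a trained model.)
w = rng.normal(size=100)

def predict(x):
    return int(w @ x > 0)

# A "clean" input lying a small margin inside class 1.
x = 0.1 * w / np.linalg.norm(w)

eps = 0.2  # perturbation budget (L2 norm), chosen to exceed the margin

# Adversarial perturbation: for a linear model the worst-case direction
# is exactly -w / ||w||, which spends the whole budget against the margin.
x_adv = x - eps * w / np.linalg.norm(w)

# Random perturbation of the same norm: in high dimensions it is nearly
# orthogonal to w, so almost none of the budget reduces the margin.
noise = rng.normal(size=100)
x_noisy = x + eps * noise / np.linalg.norm(noise)

print(predict(x))        # 1: clean input is classified as class 1
print(predict(x_adv))    # 0: adversarial input flips the prediction
print(predict(x_noisy))  # 1: equally large random noise almost never does
```

The same perturbation budget that reliably flips the prediction when aimed adversarially is essentially harmless when applied at random, which is why robustness to one does not imply robustness to the other and why both need to be evaluated.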
Who Needs to Know This

Machine learning researchers and engineers can use this perspective to improve model robustness, and data scientists can apply it to build more reliable models

Key Insight

💡 Adversarial examples are a natural consequence of machine learning models and should be considered a feature, not a bug
