Offense - Adversarial Threats/Attacks
______________________________________________________
- Attacking Machine Learning with Adversarial Examples | OpenAI - By Ian Goodfellow, Nicolas Papernot, Sandy Huang, Yan Duan, Pieter Abbeel & Jack Clark
- Explaining and Harnessing Adversarial Examples | Ian J. Goodfellow, Jonathon Shlens, Christian Szegedy
- CleverHans - a library for benchmarking the vulnerability of machine learning models to adversarial examples (see also the accompanying blog)
- Adversarial Machine Learning for Anti-Malware Software | nababora @ GitHub
- EvadeML.org | University of Virginia
- AdversariaLib: An Open-source Library for the Security Evaluation of Machine Learning Algorithms Under Attack (PDF)
- Pattern Recognition and Applications Lab (PRA Lab)
Adversarial examples are inputs to machine learning models that an attacker has intentionally designed to cause the model to make a mistake; they’re like optical illusions for machines.
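A common way to construct such inputs is the Fast Gradient Sign Method (FGSM) introduced in the "Explaining and Harnessing Adversarial Examples" paper listed above. The sketch below is a minimal PyTorch illustration of that idea; the untrained stand-in model, random input, and epsilon value are placeholders for this example only, not code from any of the linked projects.

```python
# Minimal FGSM (Fast Gradient Sign Method) sketch in PyTorch.
# The tiny untrained model and random input below are placeholders;
# in practice you would attack a trained classifier on real data.
import torch
import torch.nn as nn

def fgsm_attack(model, x, y, epsilon):
    """Return x_adv = x + epsilon * sign(grad_x loss(model(x), y))."""
    x_adv = x.clone().detach().requires_grad_(True)
    loss = nn.functional.cross_entropy(model(x_adv), y)
    loss.backward()
    return (x_adv + epsilon * x_adv.grad.sign()).detach()

if __name__ == "__main__":
    torch.manual_seed(0)
    model = nn.Sequential(nn.Flatten(), nn.Linear(28 * 28, 10))  # stand-in classifier
    x = torch.rand(1, 1, 28, 28)   # placeholder "image"
    y = torch.tensor([3])          # placeholder label
    x_adv = fgsm_attack(model, x, y, epsilon=0.1)
    print("max perturbation:", (x_adv - x).abs().max().item())  # bounded by epsilon
```

The perturbation is imperceptibly small (bounded by epsilon in each pixel) yet, against a trained model, is often enough to flip the predicted class, which is what makes these inputs behave like optical illusions for machines.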