Difference between revisions of "Offense - Adversarial Threats/Attacks"
* [http://blog.openai.com/adversarial-example-research/ Attacking Machine Learning with Adversarial Examples | OpenAI - By Ian Goodfellow, Nicolas Papernot, Sandy Huang, Yan Duan, Pieter Abbeel & Jack Clark]
* [http://www.cleverhans.io/ Cleverhans] - library for benchmarking the vulnerability of machine learning models to adversarial examples (blog)
* [http://github.com/nababora/advML Adversarial Machine Learning for Anti-Malware Software | nababora @ GitHub]
* [http://github.com/cchio/deep-pwning Deep-pwning/Metasploit | Clarence Chio]
* [http://evademl.org/ EvadeML.org | University of Virginia]
* [http://arxiv.org/pdf/1611.04786.pdf AdversariaLib: An Open-source Library for the Security Evaluation of Machine Learning Algorithms Under Attack (PDF)]
<youtube>sFhD6ABghf8</youtube>
<youtube>dfgOar_jaG0</youtube>
* Szegedy et al. [http://arxiv.org/abs/1312.6199 Intriguing properties of neural networks]
* Papernot et al. [http://arxiv.org/abs/1511.07528 The Limitations of Deep Learning in Adversarial Settings]
* Papernot et al. [http://arxiv.org/abs/1602.02697 Practical Black-Box Attacks against Deep Learning Systems using Adversarial Examples]
* Goodfellow et al. [http://arxiv.org/abs/1412.6572 Explaining and Harnessing Adversarial Examples]
* Papernot et al. [http://arxiv.org/abs/1605.07277 Transferability in Machine Learning: from Phenomena to Black-Box Attacks using Adversarial Samples]
* Grosse et al. [http://arxiv.org/abs/1606.04435 Adversarial Perturbations Against Deep Neural Networks for Malware Classification]
* Nguyen et al. [http://arxiv.org/abs/1412.1897 Deep Neural Networks are Easily Fooled: High Confidence Predictions for Unrecognizable Images]
* Xu et al. [http://www.cs.virginia.edu/~evans/pubs/ndss2016/ Automatically Evading Classifiers: A Case Study on PDF Malware Classifiers]
* Kantchelian et al. [http://arxiv.org/abs/1509.07892 Evasion and Hardening of Tree Ensemble Classifiers]
* Biggio et al. [http://proceedings.mlr.press/v20/biggio11/biggio11.pdf Support Vector Machines Under Adversarial Label Noise]
* Biggio et al. [http://arxiv.org/abs/1206.6389 Poisoning Attacks against Support Vector Machines]
* Papernot et al. [http://arxiv.org/abs/1511.04508 Distillation as a Defense to Adversarial Perturbations against Deep Neural Networks]
* Ororbia II et al. [http://arxiv.org/abs/1601.07213 Unifying Adversarial Training Algorithms with Flexible Deep Data Gradient Regularization]
* Jin et al. [http://arxiv.org/abs/1511.06306 Robust Convolutional Neural Networks under Adversarial Noise]
* Goodfellow et al. [http://www.kdnuggets.com/2015/07/deep-learning-adversarial-examples-misconceptions.html Deep Learning Adversarial Examples – Clarifying Misconceptions]
Revision as of 20:07, 11 June 2018
______________________________________________________
* Pattern Recognition and Applications Lab (PRA Lab)
Adversarial examples are inputs to machine learning models that an attacker has intentionally designed to cause the model to make a mistake; they’re like optical illusions for machines.
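Several of the papers listed above describe concrete ways to craft such inputs. As an illustration only, the sketch below applies the fast gradient sign method (FGSM) from Goodfellow et al., "Explaining and Harnessing Adversarial Examples", to a toy logistic-regression classifier; the weights, input vector, and epsilon value are made up for this example and do not come from any tool or dataset referenced on this page.

<pre>
# Minimal FGSM sketch on a toy logistic-regression classifier (illustrative only;
# all weights and inputs below are hypothetical).
import numpy as np

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

def fgsm_perturb(x, y, w, b, epsilon):
    """Return x nudged in the direction that increases the classifier's loss.

    For p = sigmoid(w.x + b) with cross-entropy loss, the gradient of the loss
    with respect to the input is (p - y) * w, so the FGSM step is
    x_adv = x + epsilon * sign(gradient).
    """
    p = sigmoid(np.dot(w, x) + b)
    grad_x = (p - y) * w
    return x + epsilon * np.sign(grad_x)

# Hypothetical trained weights and a benign sample (true label 0).
rng = np.random.default_rng(0)
w = rng.normal(size=20)
b = 0.0
x = rng.normal(size=20)
y = 0

x_adv = fgsm_perturb(x, y, w, b, epsilon=0.25)
print("classifier score before:", sigmoid(np.dot(w, x) + b))
print("classifier score after: ", sigmoid(np.dot(w, x_adv) + b))
</pre>

Each feature moves by at most epsilon, yet the classifier's score shifts sharply toward the wrong class; the deep-learning attacks cited above follow the same principle, with the gradient taken through a neural network rather than a linear model.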