Offense - Adversarial Threats/Attacks
______________________________________________________
- Attacking Machine Learning with Adversarial Examples | OpenAI - By Ian Goodfellow, Nicolas Papernot, Sandy Huang, Yan Duan, Pieter Abbeel & Jack Clark
- Cleverhans - library for benchmarking the vulnerability of machine learning models to adversarial examples (blog)
- Adversarial Machine Learning for Anti-Malware Software | nababora @ GitHub
- Deep-pwning/Metasploit | Clarence Chio
- EvadeML.org | University of Virginia
- AdversariaLib: An Open-source Library for the Security Evaluation of Machine Learning Algorithms Under Attack (.pdf)
- Pattern Recognition and Applications Lab (PRA Lab)
Adversarial examples are inputs to machine learning models that an attacker has intentionally designed to cause the model to make a mistake; they're like optical illusions for machines.

Myth: An attacker must have access to the model to generate adversarial examples. Fact: Adversarial examples generalize across models trained to perform the same task, even if those models have different architectures and were trained on a different training set. This means an attacker can train their own model, generate adversarial examples against it, and then deploy those adversarial examples against a model they do not have access to. - Deep Learning Adversarial Examples – Clarifying Misconceptions | Goodfellow et al.
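The transfer attack described above can be sketched in a few lines. Below is a minimal, hypothetical example of the Fast Gradient Sign Method from Goodfellow et al.'s "Explaining and Harnessing Adversarial Examples", assuming a PyTorch classifier; the model names (substitute_model, black_box_target) and the eps budget are placeholders for illustration, not part of any of the cited tools. The attacker perturbs inputs against a locally trained substitute and then submits the same perturbed inputs to the inaccessible target.

<pre>
import torch
import torch.nn.functional as F

def fgsm(substitute_model, x, y, eps=0.03):
    """Fast Gradient Sign Method (Goodfellow et al.) - illustrative sketch.

    substitute_model : a locally trained torch.nn.Module classifier standing
                       in for the black-box target the attacker cannot access
    x                : input batch with values in [0, 1]
    y                : true labels for the batch
    eps              : L-infinity perturbation budget (illustrative value)
    """
    x_adv = x.clone().detach().requires_grad_(True)
    # Loss the attacker wants to increase on the substitute model.
    loss = F.cross_entropy(substitute_model(x_adv), y)
    loss.backward()
    # One step in the direction of the gradient's sign, clipped to the valid range.
    return (x_adv + eps * x_adv.grad.sign()).clamp(0.0, 1.0).detach()

# Transferability in practice (hypothetical target): craft on the substitute,
# then query the black-box model with the same perturbed inputs.
# x_adv = fgsm(substitute_model, x, y)
# target_predictions = black_box_target(x_adv)
</pre>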
Szegedy et al. [http://arxiv.org/abs/1312.6199 Intriguing properties of neural networks]
Goodfellow et al. [http://arxiv.org/abs/1412.6572 Explaining and Harnessing Adversarial Examples]
Goodfellow et al. [http://www.kdnuggets.com/2015/07/deep-learning-adversarial-examples-misconceptions.html Deep Learning Adversarial Examples – Clarifying Misconceptions]
Papernot et al. [http://arxiv.org/abs/1605.07277 Transferability in Machine Learning: from Phenomena to Black-Box Attacks using Adversarial Samples]
Papernot et al. [http://arxiv.org/abs/1511.07528 The Limitations of Deep Learning in Adversarial Settings]
Papernot et al. [http://arxiv.org/abs/1602.02697 Practical Black-Box Attacks against Deep Learning Systems using Adversarial Examples]
Papernot et al. [http://arxiv.org/abs/1511.04508 Distillation as a Defense to Adversarial Perturbations against Deep Neural Networks]
Papernot et al. [https://www.usenix.org/sites/default/files/conference/protected-files/enigma17_slides_papernot.pdf Adversarial Examples in Machine Learning]
Biggio et al. [http://proceedings.mlr.press/v20/biggio11/biggio11.pdf Support Vector Machines Under Adversarial Label Noise]
Biggio et al. [http://arxiv.org/abs/1206.6389 Poisoning Attacks against Support Vector Machines]
Grosse et al. [http://arxiv.org/abs/1606.04435 Adversarial Perturbations Against Deep Neural Networks for Malware Classification]
Nguyen et al. [http://arxiv.org/abs/1412.1897 Deep Neural Networks are Easily Fooled: High Confidence Predictions for Unrecognizable Images]
Xu et al. Automatically Evading Classifiers: A Case Study on PDF Malware Classifiers
Kantchelian et al. [http://arxiv.org/abs/1509.07892 Evasion and Hardening of Tree Ensemble Classifiers]
Ororbia II et al. [http://arxiv.org/abs/1601.07213 Unifying Adversarial Training Algorithms with Flexible Deep Data Gradient Regularization]
Jin et al. [http://arxiv.org/abs/1511.06306 Robust Convolutional Neural Networks under Adversarial Noise]
Marco Barreno et al. [http://bnrg.cs.berkeley.edu/~adj/publications/paper-files/asiaccs06.pdf Can Machine Learning Be Secure?]
J.D. Tygar, Ling Huang et al. [http://people.eecs.berkeley.edu/~tygar/papers/SML2/Adversarial_AISEC.pdf Adversarial Machine Learning]
Huang Xiao et al. [http://pdfs.semanticscholar.org/6adb/6154e091e6448d63327eadb6159746a2710d.pdf Adversarial and Secure Machine Learning]
William Uther et al. [http://www.cs.cmu.edu/~mmv/papers/03TR-advRL.pdf Adversarial Reinforcement Learning]
Alexey Kurakin et al. [http://openreview.net/pdf?id=S1OufnIlx Adversarial examples in the physical world]
Pavel Laskov et al. [http://citeseerx.ist.psu.edu/viewdoc/download?doi=10.1.1.375.4564&rep=rep1&type=pdf Machine Learning in Adversarial Environments]