Offense - Adversarial Threats/Attacks

 
* [http://github.com/cchio/deep-pwning Deep-pwning/Metasploit | Clarence Chio]

* [http://evademl.org/ EvadeML.org | University of Virginia]

* [http://arxiv.org/pdf/1611.04786.pdf AdversariaLib: An Open-source Library for the Security Evaluation of Machine Learning Algorithms Under Attack]

* [http://pralab.diee.unica.it/en Pattern Recognition and Applications Lab (PRA Lab)]

** [http://pralab.diee.unica.it/en/AdversariaLib AdversariaLib | PRA Lab]

** [http://pralab.diee.unica.it/en/ALFASVMLib Adversarial Label Flip Attacks against Support Vector Machines (ALFASVMLib) | PRA Lab]
  
Adversarial examples are inputs to machine learning models that an attacker has intentionally designed to cause the model to make a mistake; they're like optical illusions for machines. Myth: An attacker must have access to the model to generate adversarial examples. Fact: Adversarial examples generalize across models trained to perform the same task, even if those models have different architectures and were trained on different training sets. This means an attacker can train their own model, generate adversarial examples against it, and then deploy those adversarial examples against a model they do not have access to. - [http://www.kdnuggets.com/2015/07/deep-learning-adversarial-examples-misconceptions.html Deep Learning Adversarial Examples – Clarifying Misconceptions | Goodfellow et al.]
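To make the black-box claim above concrete, the sketch below trains a surrogate model and a separate "victim" model on disjoint data, crafts adversarial examples against the surrogate only (using the Fast Gradient Sign Method from Goodfellow et al., linked under Papers), and then measures how the victim's accuracy drops on those transferred inputs. This is a minimal illustrative sketch, not code from any of the tools or papers listed on this page; the toy dataset, the logistic-regression models, and all parameter choices (eps, learning rate, sample sizes) are assumptions made purely for demonstration.

<syntaxhighlight lang="python">
# Minimal transferability sketch: adversarial examples crafted against a
# surrogate model also fool an independently trained "victim" model.
# Toy linear task with logistic-regression models; illustrative only.
import numpy as np

rng = np.random.default_rng(0)
W_TRUE = rng.normal(size=20)                      # hidden ground-truth direction

def make_data(n):
    """Synthetic binary task: label is 1 when w_true . x > 0."""
    X = rng.normal(size=(n, W_TRUE.size))
    y = (X @ W_TRUE > 0).astype(float)
    return X, y

def train_logreg(X, y, lr=0.1, epochs=200):
    """Plain gradient-descent logistic regression; returns a weight vector."""
    w = np.zeros(X.shape[1])
    for _ in range(epochs):
        p = 1.0 / (1.0 + np.exp(-(X @ w)))
        w -= lr * X.T @ (p - y) / len(y)
    return w

def fgsm(X, y, w, eps=0.5):
    """Fast Gradient Sign Method: move each input by eps in the sign of the
    loss gradient w.r.t. the input, computed on the surrogate model only."""
    p = 1.0 / (1.0 + np.exp(-(X @ w)))
    grad_x = np.outer(p - y, w)                   # d(logistic loss)/dx per example
    return X + eps * np.sign(grad_x)

def accuracy(X, y, w):
    return float(np.mean(((X @ w) > 0) == (y > 0.5)))

# Attacker and victim train on disjoint data; the attacker never sees w_victim.
X_att, y_att = make_data(1000)
X_vic, y_vic = make_data(1000)
w_surrogate = train_logreg(X_att, y_att)
w_victim = train_logreg(X_vic, y_vic)

# Craft adversarial test inputs against the surrogate, then attack the victim.
X_test, y_test = make_data(500)
X_adv = fgsm(X_test, y_test, w_surrogate)

print("victim accuracy on clean inputs:      ", accuracy(X_test, y_test, w_victim))
print("victim accuracy on adversarial inputs:", accuracy(X_adv, y_test, w_victim))
</syntaxhighlight>

On this toy task both models converge to nearly the same decision boundary, so inputs pushed across the surrogate's boundary also cross the victim's; this is the same transfer effect that Papernot et al. demonstrate for real neural networks in the transferability paper listed below.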
  
 
<youtube>4rFOkpI0Lcg</youtube>

<youtube>dfgOar_jaG0</youtube>
== Papers ==
 
Papernot et al. [http://arxiv.org/abs/1605.07277 Transferability in Machine Learning: from Phenomena to Black-Box Attacks using Adversarial Samples]

Papernot et al. The Limitations of Deep Learning in Adversarial Settings

Papernot et al. Practical Black-Box Attacks against Deep Learning Systems using Adversarial Examples

Papernot et al. Distillation as a Defense to Adversarial Perturbations against Deep Neural Networks

Papernot et al. Adversarial Examples in Machine Learning

Goodfellow et al. [http://arxiv.org/abs/1412.6572 Explaining and Harnessing Adversarial Examples]

Goodfellow et al. [http://www.kdnuggets.com/2015/07/deep-learning-adversarial-examples-misconceptions.html Deep Learning Adversarial Examples – Clarifying Misconceptions]

Biggio et al. [http://proceedings.mlr.press/v20/biggio11/biggio11.pdf Support Vector Machines Under Adversarial Label Noise]

Biggio et al. [http://arxiv.org/abs/1206.6389 Poisoning Attacks against Support Vector Machines]

Eykholt et al. [http://arxiv.org/pdf/1707.08945.pdf Robust Physical-World Attacks on Deep Learning Visual Classification]

Szegedy et al. [http://arxiv.org/abs/1312.6199 Intriguing properties of neural networks]

Grosse et al. Adversarial Perturbations Against Deep Neural Networks for Malware Classification

Nguyen et al. Deep Neural Networks are Easily Fooled: High Confidence Predictions for Unrecognizable Images

Xu et al. Automatically Evading Classifiers: A Case Study on PDF Malware Classifiers

Kantchelian et al. Evasion and Hardening of Tree Ensemble Classifiers

Ororbia II et al. Unifying Adversarial Training Algorithms with Flexible Deep Data Gradient Regularization

Jin et al. Robust Convolutional Neural Networks under Adversarial Noise

Marco Barreno et al. Can Machine Learning Be Secure?

J.D. Tygar, Ling Huang et al. Adversarial Machine Learning

Huang Xiao et al. Adversarial and Secure Machine Learning

William Uther et al. Adversarial Reinforcement Learning

Alexey Kurakin et al. Adversarial examples in the physical world

Pavel Laskov et al. Machine Learning in Adversarial Environments

== Weaponizing Machine Learning ==

Youtube search...