Offense - Adversarial Threats/Attacks

______________________________________________________

* [http://arxiv.org/abs/1412.6572 Explaining and Harnessing Adversarial Examples | Ian J. Goodfellow, Jonathon Shlens, Christian Szegedy]
* [http://blog.openai.com/adversarial-example-research/ Attacking Machine Learning with Adversarial Examples | OpenAI - By Ian Goodfellow, Nicolas Papernot, Sandy Huang, Yan Duan, Pieter Abbeel & Jack Clark]
* [http://www.cleverhans.io/ cleverhans blog - a library for benchmarking the vulnerability of machine learning models to adversarial examples]
 
* [http://github.com/nababora/advML Adversarial Machine Learning for Anti-Malware Software | nababora @ GitHub]
 
* [http://evademl.org/ EvadeML.org | University of Virginia]
 
 
* [http://pralab.diee.unica.it/en/ALFASVMLib adversarial label flip attacks against Support Vector Machines (ALFASVMLib) | PRA Lab]
 
Adversarial examples are inputs to machine learning models that an attacker has intentionally designed to cause the model to make a mistake; they’re like optical illusions for machines.
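
A common way to construct such inputs is the fast gradient sign method (FGSM) from the Goodfellow et al. paper linked above. The sketch below is a minimal illustration rather than a reference implementation: it assumes a differentiable PyTorch classifier, and the names model, loss_fn, and the step size epsilon are placeholders, not parts of any library listed on this page.

<syntaxhighlight lang="python">
import torch

def fgsm_attack(model, loss_fn, x, y, epsilon=0.03):
    # FGSM (Goodfellow et al., linked above): take one step of size
    # epsilon in the direction of the sign of the loss gradient with
    # respect to the input, which pushes the model's loss upward.
    x = x.clone().detach().requires_grad_(True)
    loss = loss_fn(model(x), y)
    loss.backward()
    x_adv = x + epsilon * x.grad.sign()
    # Project back to the valid pixel range so the perturbed input
    # remains a legitimate image.
    return x_adv.clamp(0.0, 1.0).detach()

# Usage (hypothetical classifier and data):
#   x_adv = fgsm_attack(model, torch.nn.CrossEntropyLoss(), images, labels)
</syntaxhighlight>

A small epsilon keeps the perturbation nearly imperceptible to humans while still flipping the model's prediction; this signed-gradient idea underlies several of the attack libraries listed above, including cleverhans.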
  
 
<youtube>4rFOkpI0Lcg</youtube>
 
