Offense - Adversarial Threats/Attacks


Youtube search...

______________________________________________________


Adversarial examples are inputs to machine learning models that an attacker has intentionally designed to cause the model to make a mistake; they're like optical illusions for machines. Myth: An attacker must have access to the model to generate adversarial examples. Fact: Adversarial examples generalize across models trained to perform the same task, even if those models have different architectures and were trained on different training sets. This means an attacker can train their own model, generate adversarial examples against it, and then deploy those adversarial examples against a model they do not have access to. (from "Deep Learning Adversarial Examples – Clarifying Misconceptions," Goodfellow et al.)
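
The transfer attack described above is simple enough to demonstrate end to end. The sketch below is a minimal illustration, not taken from any of the papers listed here: it assumes PyTorch is available and uses a small synthetic two-class task. A "substitute" model is trained on the attacker's own data, one-step FGSM perturbations (Goodfellow et al., Explaining and Harnessing Adversarial Examples) are crafted against that substitute, and the perturbed inputs are then scored against a separately trained "victim" model whose gradients and parameters are never used. The model sizes, the epsilon value, and the data itself are all illustrative choices.

import torch
import torch.nn as nn

torch.manual_seed(0)

def make_data(n, d=20, w=None):
    # Synthetic two-class task: the label is the sign of a fixed linear score plus a little noise.
    x = torch.randn(n, d)
    y = ((x @ w + 0.1 * torch.randn(n)) > 0).long()
    return x, y

def make_model(d=20, hidden=64):
    return nn.Sequential(nn.Linear(d, hidden), nn.ReLU(), nn.Linear(hidden, 2))

def train(model, x, y, epochs=200):
    opt = torch.optim.Adam(model.parameters(), lr=1e-2)
    loss_fn = nn.CrossEntropyLoss()
    for _ in range(epochs):
        opt.zero_grad()
        loss_fn(model(x), y).backward()
        opt.step()
    return model

def fgsm(model, x, y, eps=0.5):
    # Fast Gradient Sign Method: one step in the input direction that increases the loss.
    x_adv = x.clone().requires_grad_(True)
    nn.CrossEntropyLoss()(model(x_adv), y).backward()
    return (x_adv + eps * x_adv.grad.sign()).detach()

def accuracy(model, x, y):
    return (model(x).argmax(dim=1) == y).float().mean().item()

w_true = torch.randn(20)                        # shared ground-truth task
x_sub, y_sub = make_data(2000, w=w_true)        # attacker's own training set
x_vic, y_vic = make_data(2000, w=w_true)        # victim's (different) training set
x_test, y_test = make_data(1000, w=w_true)      # clean evaluation set

substitute = train(make_model(), x_sub, y_sub)  # model the attacker fully controls
victim = train(make_model(), x_vic, y_vic)      # target model; no gradient or parameter access assumed

x_adv = fgsm(substitute, x_test, y_test)        # crafted only with the substitute's gradients

print("victim accuracy, clean inputs:        %.3f" % accuracy(victim, x_test))
print("victim accuracy, transferred attacks: %.3f" % accuracy(victim, x_adv))

In a run like this, the victim's accuracy on the transferred examples typically falls well below its clean accuracy, which is the transferability phenomenon the sources below study in detail.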

Sources

Papernot et al. Transferability in Machine Learning: from Phenomena to Black-Box Attacks using Adversarial Samples

Papernot et al. The Limitations of Deep Learning in Adversarial Settings

Papernot et al. Practical Black-Box Attacks against Deep Learning Systems using Adversarial Examples

Papernot et al. Distillation as a Defense to Adversarial Perturbations against Deep Neural Networks

Papernot et al. Adversarial Examples in Machine Learning

Goodfellow et al. Explaining and Harnessing Adversarial Examples

Biggio et al. Support Vector Machines Under Adversarial Label Noise

Biggio et al. Poisoning Attacks against Support Vector Machines


Szegedy et al. Intriguing properties of neural networks

Grosse et al. Adversarial Perturbations Against Deep Neural Networks for Malware Classification

Nguyen et al. Deep Neural Networks are Easily Fooled: High Confidence Predictions for Unrecognizable Images

Xu et al. Automatically Evading Classifiers: A Case Study on PDF Malware Classifiers

Kantchelian et al. Evasion and Hardening of Tree Ensemble Classifiers

Ororbia II et al. Unifying Adversarial Training Algorithms with Flexible Deep Data Gradient Regularization

Jin et al. Robust Convolutional Neural Networks under Adversarial Noise

Barreno et al. Can Machine Learning Be Secure?

J.D. Tygar, Ling Huang et al. Adversarial Machine Learning

Xiao et al. Adversarial and Secure Machine Learning

Uther et al. Adversarial Reinforcement Learning

Kurakin et al. Adversarial examples in the physical world

Laskov et al. Machine Learning in Adversarial Environments


With Papers

Eykholt et al. Robust Physical-World Attacks on Deep Learning Visual Classification

Naveiro et al. Adversarial classification: An adversarial risk analysis approach, 21 Feb 2018

Kantarcioglu et al. Adversarial Data Mining for Cyber Security, 28 Oct 2016

Al-Dujaili et al. Adversarial Deep Learning for Robust Detection of Binary Encoded Malware, 25 Mar 2018

Grosse et al. Adversarial Examples for Malware Detection, 12 Aug 2017

Kreuk et al. Deceiving End-to-End Deep Learning Malware Detectors using Adversarial Examples, 13 May 2018

Keshet et al. Adversarial Examples on Discrete Sequences for Beating Whole-Binary Malware Detection, 13 Feb 2018

Goodfellow et al. Adversarial Examples that Fool both Human and Computer Vision, 22 May 2018

Yuan et al. Adversarial Examples: Attacks and Defenses for Deep Learning, 5 Jan 2018

Miller et al. Adversarial Learning: A Critical Review and Active Learning Study, 27 May 2017

Kolosnjaji et al. Adversarial Malware Binaries: Evading Deep Learning for Malware Detection in Executables, 12 Mar 2018

Chen et al. Automated Poisoning Attacks and Defenses in Malware Detection Systems: An Adversarial Machine Learning Approach, 31 Oct 2017

Papernot et al. Adversarial Perturbations Against Deep Neural Networks, 16 Jun 2016

Uesato et al. Adversarial Risk and the Dangers of Evaluating Against Weak Attacks, 12 Jun 2018

Norton et al. Adversarial-Playground: A Visualization Suite Showing How Adversarial Examples Fool Deep Learning, 1 Aug 2017

Wang et al. Adversary Resistant Deep Neural Networks with an Application to Malware Detection, 27 Apr 2017

Stokes et al. Attack and Defense of Dynamic Analysis-Based, Adversarial Neural Malware Classification Models, 16 Dec 2017

Goodfellow et al. Attacking Machine Learning with Adversarial Examples, 24 Feb 2017

Carlini et al. Audio Adversarial Examples: Targeted Attacks on Speech-to-Text, 5 Jan 2018

Shen et al. AUROR: Defending Against Poisoning Attacks in Collaborative Deep Learning Systems, 5 Dec 2016

Hosseini et al. Blocking Transferability of Adversarial Examples in Black-Box Learning Systems, 13 Mar 2017

Raghunathan et al. Certified Defenses against Adversarial Examples, 29 Jan 2018

Rouhani et al. CuRTAIL: ChaRacterizing and Thwarting AdversarIal deep Learning, 1 Apr 2018

Paudice et al. Detection of Adversarial Training Examples in Poisoning Attacks through Anomaly Detection, 8 Feb 2018

Chen et al. EAD: Elastic-Net Attacks to Deep Neural Networks via Adversarial Examples, 10 Feb 2018

Xu et al. Feature Squeezing: Detecting Adversarial Examples in Deep Neural Networks, 5 Dec 2017

Hu et al. Generating Adversarial Malware Examples for Black-Box Attacks Based on GAN, 20 Feb 2017

Chen et al. Hardening Malware Detection Systems Against Cyber Maneuvers: An Adversarial Machine Learning Approach, 13 Oct 2017

Demontis et al. Infinity-Norm Support Vector Machines Against Adversarial Label Contamination, 2017

Abhijith Introduction to Artificial Intelligence for Security Professionals, 12 Aug 2017

Anderson et al. Learning to Evade Static PE Machine Learning Malware Models via Reinforcement Learning, 26 Jan 2018

Papernot et al. On the (Statistical) Detection of Adversarial Examples, 21 Feb 2017

Papernot et al. Practical Black-Box Attacks against Machine Learning, 8 Feb 2016

Bulo et al. Randomized Prediction Games for Adversarial Machine Learning, 11 Nov 2017

Kantchelian [http://pdfs.semanticscholar.org/4a8d/97172382144b9906e2cec69d3decb4188fb7.pdf Taming Evasions in Machine Learning Based Detection, 12 Aug 2016]

Chen et al. [http://arxiv.org/pdf/1712.05526.pdf Targeted Backdoor Attacks on Deep Learning Systems Using Data Poisoning, 15 Dec 2017]

Goodfellow et al. [http://arxiv.org/pdf/1704.03453.pdf The Space of Transferable Adversarial Examples, 23 May 2017]

Akhtar et al. [http://arxiv.org/pdf/1801.00553.pdf Threat of Adversarial Attacks on Deep Learning in Computer Vision: A Survey, 26 Feb 2018]





Weaponizing Machine Learning

Youtube search...