Offense - Adversarial Threats/Attacks

Adversarial examples are inputs to machine learning models that an attacker has intentionally designed to cause the model to make a mistake; they're like optical illusions for machines. Myth: An attacker must have access to the model to generate adversarial examples. Fact: Adversarial examples generalize across models trained to perform the same task, even if those models have different architectures and were trained on a different training set. This means an attacker can train their own model, generate adversarial examples against it, and then deploy those adversarial examples against a model they do not have access to. - Deep Learning Adversarial Examples – Clarifying Misconceptions | Goodfellow et al.
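
The transferability claim above is what makes practical black-box attacks possible. Below is a minimal sketch, not taken from any of the papers cited here, of that workflow: craft FGSM adversarial examples (Goodfellow et al., Explaining and Harnessing Adversarial Examples) against a locally trained surrogate model, then measure how often they also fool an independently trained target model. It assumes PyTorch is available; the synthetic data, toy architectures, and perturbation budget are illustrative placeholders only.

<syntaxhighlight lang="python">
# Sketch of a black-box transfer attack: FGSM examples crafted on a surrogate
# model are replayed against a separately trained target model. Data and
# models are toy stand-ins, not drawn from any cited paper.
import torch
import torch.nn as nn

torch.manual_seed(0)

# Synthetic two-class task: the label depends on the sum of the first 10 features.
X = torch.randn(2000, 20)
y = (X[:, :10].sum(dim=1) > 0).long()

def make_model(hidden):
    return nn.Sequential(nn.Linear(20, hidden), nn.ReLU(), nn.Linear(hidden, 2))

def train(model, inputs, labels, epochs=200):
    opt = torch.optim.Adam(model.parameters(), lr=1e-2)
    loss_fn = nn.CrossEntropyLoss()
    for _ in range(epochs):
        opt.zero_grad()
        loss_fn(model(inputs), labels).backward()
        opt.step()
    return model

# Different architectures, disjoint training halves -- mirroring the
# "different architecture, different training set" point above.
surrogate = train(make_model(64), X[:1000], y[:1000])
target = train(make_model(32), X[1000:], y[1000:])

def fgsm(model, x, labels, eps):
    # One-step FGSM: move each input in the sign of the loss gradient.
    x = x.clone().requires_grad_(True)
    nn.CrossEntropyLoss()(model(x), labels).backward()
    return (x + eps * x.grad.sign()).detach()

# The attacker only ever queries the surrogate; the target model is
# untouched until evaluation time.
x_clean, y_true = X[1000:], y[1000:]
x_adv = fgsm(surrogate, x_clean, y_true, eps=0.5)

def accuracy(model, x):
    with torch.no_grad():
        return (model(x).argmax(dim=1) == y_true).float().mean().item()

print(f"target accuracy, clean inputs:       {accuracy(target, x_clean):.2f}")
print(f"target accuracy, transferred attack: {accuracy(target, x_adv):.2f}")
</syntaxhighlight>

On this toy task the transferred examples usually drop the target's accuracy well below its clean accuracy even though the attacker never touched the target's weights; real attacks refine this with iterative methods and tuned perturbation budgets (see Papernot et al., Practical Black-Box Attacks against Machine Learning, below).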
 
== References ==
 
Abadi, M., Chu, A., Goodfellow, I., McMahan, H., Mironov, I., Talwar, K., and Zhang, L. [http://arxiv.org/pdf/1607.00133.pdf Deep Learning with Differential Privacy], 24 Oct 2016
  
Abhijith [http://abhijith.live/introduction-to-artificial-intelligence-for-security-professionals-book/ Introduction to Artificial Intelligence for Security Professionals], 12 Aug 2017
  
Abramson, Myriam [http://pdfs.semanticscholar.org/b2f7/69ddcf8cae594f39e839aa29b27b98f403ca.pdf Toward Adversarial Online Learning and the Science of Deceptive Machines], 13 Sep 2017
  
Akhtar et al. [http://arxiv.org/pdf/1801.00553.pdf Threat of Adversarial Attacks on Deep Learning in Computer Vision: A Survey], 26 Feb 2018
  
Al-Dujaili et al. [http://arxiv.org/pdf/1801.02950.pdf Adversarial Deep Learning for Robust Detection of Binary Encoded Malware], 25 Mar 2018
  
Anderson et al. [http://arxiv.org/pdf/1801.08917.pdf Learning to Evade Static PE Machine Learning Malware Models via Reinforcement Learning], 26 Jan 2018
  
Barreno et al. [http://bnrg.cs.berkeley.edu/~adj/publications/paper-files/asiaccs06.pdf Can Machine Learning Be Secure?], 21 Mar 2006
  
Biggio et al. [http://arxiv.org/pdf/1206.6389.pdf Poisoning Attacks against Support Vector Machines], 25 Mar 2013
  
Biggio et al. [http://proceedings.mlr.press/v20/biggio11/biggio11.pdf Support Vector Machines Under Adversarial Label Noise], 2011
  
Brundage et al. [http://arxiv.org/ftp/arxiv/papers/1802/1802.07228.pdf The Malicious Use of Artificial Intelligence: Forecasting, Prevention, and Mitigation], Feb 2018
  
Bulo et al. [http://pralab.diee.unica.it/sites/default/files/bulo16-tnnls.pdf Randomized Prediction Games for Adversarial Machine Learning], 11 Nov 2017
  
Carlini et al. [http://arxiv.org/pdf/1801.01944.pdf Audio Adversarial Examples: Targeted Attacks on Speech-to-Text], 5 Jan 2018
  
Chen et al. [http://arxiv.org/pdf/1706.04146.pdf Automated Poisoning Attacks and Defenses in Malware Detection Systems: An Adversarial Machine Learning Approach], 31 Oct 2017
  
Chen et al. [http://arxiv.org/pdf/1709.04114.pdf EAD: Elastic-Net Attacks to Deep Neural Networks via Adversarial Examples], 10 Feb 2018
  
Chen et al. [http://www.researchgate.net/publication/317576889_Hardening_Malware_Detection_Systems_Against_Cyber_Maneuvers_An_Adversarial_Machine_Learning_Approach Hardening Malware Detection Systems Against Cyber Maneuvers: An Adversarial Machine Learning Approach], 13 Oct 2017
  
Chen et al. [http://arxiv.org/pdf/1712.05526.pdf Targeted Backdoor Attacks on Deep Learning Systems Using Data Poisoning], 15 Dec 2017
  
Demontis et al. [http://ceur-ws.org/Vol-1816/paper-11.pdf Infinity-Norm Support Vector Machines Against Adversarial Label Contamination], 2017
  
Evtimov, I., Eykholt, K., Fernandes, E., Kohno, T., Li, B., Prakash, A., Rahmati, A., and Song, D. [http://arxiv.org/pdf/1707.08945.pdf Robust Physical-World Attacks on Deep Learning Visual Classification], 2017
  
Goodfellow et al. [http://arxiv.org/pdf/1607.02533.pdf Adversarial examples in the physical world], 11 Feb 2017
  
Goodfellow et al. [http://arxiv.org/pdf/1802.08195.pdf Adversarial Examples that Fool both Human and Computer Vision], 22 May 2018
  
Goodfellow et al. [http://blog.openai.com/adversarial-example-research/ Attacking Machine Learning with Adversarial Examples], 24 Feb 2017
  
Goodfellow et al. [http://arxiv.org/pdf/1412.6572.pdf Explaining and Harnessing Adversarial Examples], 20 Mar 2015
  
Goodfellow et al. [http://arxiv.org/pdf/1312.6199.pdf Intriguing properties of neural networks], 19 Feb 2014
  
Goodfellow et al. [http://arxiv.org/pdf/1704.03453.pdf The Space of Transferable Adversarial Examples], 23 May 2017
  
Goodfellow et al. [http://arxiv.org/pdf/1605.07277.pdf Transferability in Machine Learning: from Phenomena to Black-Box Attacks using Adversarial Samples], 24 May 2016
  
Grosse et al. [http://www.patrickmcdaniel.org/pubs/esorics17.pdf Adversarial Examples for Malware Detection], 12 Aug 2017
  
Grosse et al. [http://arxiv.org/pdf/1606.04435.pdf Adversarial Perturbations Against Deep Neural Networks for Malware Classification], 16 Jun 2016
  
Hosseini et al. [http://arxiv.org/pdf/1703.04318.pdf Blocking Transferability of Adversarial Examples in Black-Box Learning Systems], 13 Mar 2017
  
Hu et al. [http://arxiv.org/pdf/1702.05983.pdf Generating Adversarial Malware Examples for Black-Box Attacks Based on GAN], 20 Feb 2017
  
Huang et al. [http://people.eecs.berkeley.edu/~tygar/papers/SML2/Adversarial_AISEC.pdf Adversarial Machine Learning], Oct 2011
  
Jin et al. [http://arxiv.org/pdf/1511.06306.pdf Robust Convolutional Neural Networks under Adversarial Noise], 25 Feb 2016
  
Kantarcioglu et al. [http://www.utdallas.edu/~muratk/CCS-tutorial.pdf Adversarial Data Mining for Cyber Security], 28 Oct 2016
  
Kantchelian et al. [https://arxiv.org/pdf/1509.07892.pdf Evasion and Hardening of Tree Ensemble Classifiers], 27 May 2016
  
Kantchelian [http://pdfs.semanticscholar.org/4a8d/97172382144b9906e2cec69d3decb4188fb7.pdf Taming Evasions in Machine Learning Based Detection], 12 Aug 2016  
  
Keshet et al. [http://www.groundai.com/project/adversarial-examples-on-discrete-sequences-for-beating-whole-binary-malware-detection/ Adversarial Examples on Discrete Sequences for Beating Whole-Binary Malware Detection], 13 Feb 2018
  
Kolosnjaji et al. [http://arxiv.org/pdf/1803.04173.pdf Adversarial Malware Binaries: Evading Deep Learning for Malware Detection in Executables], 12 Mar 2018
  
Kreuk et al. [http://arxiv.org/pdf/1802.04528.pdf Deceiving End-to-End Deep Learning Malware Detectors using Adversarial Examples], 13 May 2018
  
Laskov et al. [http://citeseerx.ist.psu.edu/viewdoc/download?doi=10.1.1.375.4564&rep=rep1&type=pdf Machine Learning in Adversarial Environments], 28 Jun 2010
  
Luo et al. [http://arxiv.org/pdf/1801.04693.pdf Towards Imperceptible and Robust Adversarial Example Attacks against Neural Networks], 15 Jan 2018
  
Madry et al. [http://arxiv.org/pdf/1706.06083.pdf Towards Deep Learning Models Resistant to Adversarial Attacks], 19 Jun 2017
  
Miller et al. [http://arxiv.org/pdf/1705.09823.pdf Adversarial Learning: A Critical Review and Active Learning Study], 27 May 2017
  
Muñoz-González et al. [http://arxiv.org/pdf/1708.08689.pdf Towards Poisoning of Deep Learning Algorithms with Back-gradient Optimization], 29 Aug 2017
  
Naveiro et al. [http://arxiv.org/pdf/1802.07513.pdf Adversarial classification: An adversarial risk analysis approach], 21 Feb 2018
  
Nguyen et al. [http://arxiv.org/pdf/1412.1897.pdf Deep Neural Networks are Easily Fooled: High Confidence Predictions for Unrecognizable Images], 2 Apr 2015
  
Norton et al. [http://arxiv.org/pdf/1708.00807.pdf Adversarial-Playground: A Visualization Suite Showing How Adversarial Examples Fool Deep Learning], 1 Aug 2017
  
Ororbia II et al. [http://arxiv.org/pdf/1601.07213.pdf Unifying Adversarial Training Algorithms with Flexible Deep Data Gradient Regularization], 29 Jul 2016
  
Papernot et al. [http://www.usenix.org/sites/default/files/conference/protected-files/enigma17_slides_papernot.pdf Adversarial Examples in Machine Learning], 1 Feb 2017
  
Papernot et al. [http://arxiv.org/pdf/1606.04435.pdf Adversarial Perturbations Against Deep Neural Networks], 16 Jun 2016
  
Papernot et al. [http://arxiv.org/pdf/1511.04508.pdf Distillation as a Defense to Adversarial Perturbations against Deep Neural Networks], 14 Nov 2015
  
Papernot et al. [http://arxiv.org/abs/1511.07528 The Limitations of Deep Learning in Adversarial Settings], 24 Nov 2015
  
Papernot et al. [http://arxiv.org/pdf/1702.06280.pdf On the (Statistical) Detection of Adversarial Examples], 21 Feb 2017
  
Papernot et al. [http://arxiv.org/pdf/1602.02697.pdf Practical Black-Box Attacks against Machine Learning], 8 Feb 2016
  
Paudice et al. [http://arxiv.org/pdf/1802.03041.pdf Detection of Adversarial Training Examples in Poisoning Attacks through Anomaly Detection], 8 Feb 2018  
  
Raghunathan et al. [http://arxiv.org/pdf/1801.09344.pdf Certified Defenses against Adversarial Examples], 29 Jan 2018
  
Rouhani et al. [http://arxiv.org/pdf/1709.02538.pdf CuRTAIL: ChaRacterizing and Thwarting AdversarIal deep Learning], 1 Apr 2018
  
Shen et al. [http://www.comp.nus.edu.sg/~shruti90/papers/auror.pdf AUROR: Defending Against Poisoning Attacks in Collaborative Deep Learning Systems], 5 Dec 2016
  
Stokes et al. [http://arxiv.org/pdf/1712.05919.pdf Attack and Defense of Dynamic Analysis-Based, Adversarial Neural Malware Classification Models], 16 Dec 2017
  
Uesato et al. [http://arxiv.org/pdf/1802.05666.pdf Adversarial Risk and the Dangers of Evaluating Against Weak Attacks], 12 Jun 2018
  
Uther et al. [http://www.cs.cmu.edu/~mmv/papers/03TR-advRL.pdf Adversarial Reinforcement Learning], Jan 2003
  
Wang et al. [http://arxiv.org/pdf/1610.01239.pdf Adversary Resistant Deep Neural Networks with an Application to Malware Detection], 27 Apr 2017
  
Xiao et al. [http://pdfs.semanticscholar.org/6adb/6154e091e6448d63327eadb6159746a2710d.pdf Adversarial and Secure Machine Learning], 27 Oct 2016
  
Xu et al. [http://evademl.org/docs/evademl.pdf Automatically Evading Classifiers: A Case Study on PDF Malware Classifiers], Feb 2016
  
Xu et al. [http://arxiv.org/pdf/1704.01155.pdf Feature Squeezing: Detecting Adversarial Examples in Deep Neural Networks], 5 Dec 2017
  
Yuan et al. [http://arxiv.org/pdf/1712.07107.pdf Adversarial Examples: Attacks and Defenses for Deep Learning], 5 Jan 2018
  
 
== Weaponizing Machine Learning ==  
 

