Offense - Adversarial Threats/Attacks



Adversarial examples are inputs to machine learning models that an attacker has intentionally designed to cause the model to make a mistake; they’re like optical illusions for machines.
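A concrete illustration is the fast gradient sign method (FGSM) from Goodfellow et al., Explaining and Harnessing Adversarial Examples (listed below), which perturbs an input by a small step in the direction of the loss gradient's sign so that a classifier's prediction flips while the change stays nearly imperceptible. The following is a minimal PyTorch sketch; the toy classifier, input tensor, and epsilon value are illustrative placeholders rather than a reference implementation.

```python
# Minimal sketch of the Fast Gradient Sign Method (FGSM).
import torch
import torch.nn as nn

def fgsm_attack(model: nn.Module, x: torch.Tensor, label: torch.Tensor,
                epsilon: float = 0.03) -> torch.Tensor:
    """Perturb x in the direction that increases the loss for its true label."""
    x = x.clone().detach().requires_grad_(True)
    loss = nn.functional.cross_entropy(model(x), label)
    loss.backward()
    # Step each input dimension by epsilon in the sign of the gradient,
    # then clip back to the valid input range so the perturbation stays small.
    x_adv = x + epsilon * x.grad.sign()
    return x_adv.clamp(0.0, 1.0).detach()

# Toy usage: a hypothetical linear classifier on 28x28 grayscale images.
model = nn.Sequential(nn.Flatten(), nn.Linear(28 * 28, 10))
x = torch.rand(1, 1, 28, 28)   # placeholder input in [0, 1]
label = torch.tensor([3])      # placeholder true label
x_adv = fgsm_attack(model, x, label)
print((x_adv - x.detach()).abs().max())  # perturbation magnitude, bounded by epsilon
```

The same gradient-based idea underlies many of the attacks surveyed in the papers below; black-box variants estimate or transfer these gradients rather than computing them directly on the target model.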

Szegedy et al. Intriguing properties of neural networks

Papernot et al. The Limitations of Deep Learning in Adversarial Settings

Papernot et al. Practical Black-Box Attacks against Deep Learning Systems using Adversarial Examples

Goodfellow et al. Explaining and Harnessing Adversarial Examples

Papernot et al. Transferability in Machine Learning: from Phenomena to Black-Box Attacks using Adversarial Samples

Grosse et al. Adversarial Perturbations Against Deep Neural Networks for Malware Classification

Nguyen et al. Deep Neural Networks are Easily Fooled: High Confidence Predictions for Unrecognizable Images

Xu et al. Automatically Evading Classifiers: A Case Study on PDF Malware Classifiers

Kantchelian et al. Evasion and Hardening of Tree Ensemble Classifiers

Biggio et al. Support Vector Machines Under Adversarial Label Noise

Biggio et al. Poisoning Attacks against Support Vector Machines

Papernot et al. Distillation as a Defense to Adversarial Perturbations against Deep Neural Networks

Ororbia II et al. Unifying Adversarial Training Algorithms with Flexible Deep Data Gradient Regularization

Jin et al. Robust Convolutional Neural Networks under Adversarial Noise

Goodfellow et al. Deep Learning Adversarial Examples – Clarifying Misconceptions