Offense - Adversarial Threats/Attacks

  
 
<youtube>wbRx18VZlYA</youtube>

<youtube>JAGDpJFFM2A</youtube>

<youtube>NrGMvTZxAwU</youtube>

<youtube>4rFOkpI0Lcg</youtube>

<youtube>j9FLOinaG94</youtube>

<youtube>M2IebCN9Ht4</youtube>


Youtube search...

______________________________________________________


Adversarial examples are inputs to machine learning models that an attacker has intentionally designed to cause the model to make a mistake; they're like optical illusions for machines.

Myth: An attacker must have access to the model to generate adversarial examples.

Fact: Adversarial examples generalize across models trained to perform the same task, even if those models have different architectures and were trained on different training sets. This means an attacker can train their own model, generate adversarial examples against it, and then deploy those adversarial examples against a model they do not have access to. - Deep Learning Adversarial Examples – Clarifying Misconceptions | Goodfellow et al.
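
To make the transfer attack above concrete, here is a minimal sketch of the fast gradient sign method (FGSM), one common way to generate adversarial examples. Everything in it is a hypothetical stand-in rather than anything from the quoted article: the tiny PyTorch classifier, the random "image", the label, and the epsilon budget. In the black-box setting described above, an attacker would run this against their own surrogate model and then submit the resulting x_adv to the target model.

<syntaxhighlight lang="python">
import torch
import torch.nn as nn

def fgsm_attack(model, x, y, epsilon=0.03):
    # Fast Gradient Sign Method: move every input value by +/- epsilon
    # in the direction that increases the classification loss.
    x_adv = x.clone().detach().requires_grad_(True)
    loss = nn.CrossEntropyLoss()(model(x_adv), y)
    loss.backward()
    x_adv = x_adv + epsilon * x_adv.grad.sign()
    return x_adv.clamp(0.0, 1.0).detach()  # keep pixels in valid range

# Hypothetical usage: a stand-in linear classifier and one random "image".
model = nn.Sequential(nn.Flatten(), nn.Linear(28 * 28, 10))
x = torch.rand(1, 1, 28, 28)    # one 28x28 grayscale image in [0, 1]
y = torch.tensor([3])           # assumed true label
x_adv = fgsm_attack(model, x, y)
print((x_adv - x).abs().max())  # perturbation never exceeds epsilon
</syntaxhighlight>

FGSM takes a single gradient step, which keeps it cheap; iterative variants such as projected gradient descent take many smaller steps inside the same epsilon ball and typically find stronger adversarial examples.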