Offense - Adversarial Threats/Attacks

 
* [[Cybersecurity]]
** [[Cybersecurity References]]
** [[Government Services]]
** [[Defense]]
* [[Capabilities]]
* [[Boolean Satisfiability (SAT) Problem/Satisfiability Modulo Theories (SMT) Solvers]]



______________________________________________________

Adversarial examples are inputs to machine learning models that an attacker has intentionally designed to cause the model to make a mistake; they're like optical illusions for machines.

Myth: An attacker must have access to the model to generate adversarial examples.

Fact: Adversarial examples generalize across models trained to perform the same task, even if those models have different architectures and were trained on a different training set. This means an attacker can train their own model, generate adversarial examples against it, and then deploy those adversarial examples against a model they do not have access to. - ''Deep Learning Adversarial Examples – Clarifying Misconceptions'', Goodfellow et al.
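The transfer attack described above can be sketched in a few lines using the Fast Gradient Sign Method (FGSM) from Goodfellow et al. This is a minimal illustration, assuming PyTorch is available; the `surrogate` and `victim` classifiers, and the `epsilon` value, are placeholders the reader would supply:

<pre>
# Sketch of a black-box transfer attack via FGSM on a locally trained surrogate.
# Assumptions: `surrogate` and `victim` are image classifiers over inputs in
# [0, 1]; epsilon is an illustrative perturbation budget.
import torch
import torch.nn as nn

def fgsm_example(surrogate: nn.Module, x: torch.Tensor, y: torch.Tensor,
                 epsilon: float = 0.03) -> torch.Tensor:
    """Perturb x by epsilon * sign(gradient of the loss w.r.t. x)."""
    x_adv = x.clone().detach().requires_grad_(True)
    loss = nn.functional.cross_entropy(surrogate(x_adv), y)
    loss.backward()
    # Step in the direction that increases the surrogate's loss,
    # then clamp back into the valid input range.
    return (x_adv + epsilon * x_adv.grad.sign()).clamp(0.0, 1.0).detach()

# Transfer step: craft the examples against the surrogate the attacker
# controls, then deploy them against a victim model never accessed directly.
# x_adv = fgsm_example(surrogate, images, labels)
# predictions = victim(x_adv)  # frequently wrong, per the transferability claim
</pre>

Note that the victim model is never queried while the perturbation is crafted; the attack relies entirely on the transferability property the quote describes.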

[[File:endgame-ai-agent.jpg]]