Offense - Adversarial Threats/Attacks

Youtube search...

______________________________________________________

Adversarial examples are inputs to machine learning models that an attacker has intentionally designed to cause the model to make a mistake; they’re like optical illusions for machines. Myth: An attacker must have access to the model to generate adversarial examples. Fact: Adversarial examples generalize across models trained to perform the same task, even if those models have different architectures and were trained on a different training set. This means an attacker can train their own model, generate adversarial examples against it, and then deploy those adversarial examples against a model they do not have access to. - Deep Learning Adversarial Examples – Clarifying Misconceptions | Goodfellow et al.

== Weaponizing Machine Learning ==

Youtube search...

<youtube>wbRx18VZlYA</youtube>
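
The transferability property described in the introduction can be sketched in a few lines. Below is a minimal fast gradient sign method (FGSM) example in the spirit of Goodfellow et al.; the names surrogate, victim, x, and label are illustrative placeholders assuming a PyTorch classifier, not anything defined on this page.

<pre>
# Minimal FGSM sketch (PyTorch): craft an adversarial example on a
# surrogate model, then replay it against a separate victim model.
import torch
import torch.nn.functional as F

def fgsm(model, x, label, epsilon=0.03):
    """Perturb x by epsilon in the direction that increases the loss."""
    x = x.clone().detach().requires_grad_(True)
    loss = F.cross_entropy(model(x), label)
    loss.backward()
    # The gradient's sign gives the worst-case direction under an L-inf budget.
    return (x + epsilon * x.grad.sign()).detach()

# Transferability: an example crafted on the surrogate often also fools the
# victim, even though the attacker never queried the victim's gradients.
# x_adv = fgsm(surrogate, x, label)
# victim(x_adv).argmax(dim=1)  # frequently differs from label
</pre>
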
== Boolean Satisfiability (SAT) Problem/Satisfiability Modulo Theories (SMT): Z3 and Reluplex Solvers ==

[http://www.youtube.com/results?search_query=~SAT+SMT+Satisfiability+Modulo+Theories+Z3+Reluplex+Deep+Learning+Artificial+Intelligence Youtube search...]

* [http://rise4fun.com/ Rise4Fun - automata concurrency design encoders infrastructure languages security synthesis testing verification language]
* [http://ijcai13.org/files/tutorial_slides/tb1.pdf SAT in AI: high performance search methods with applications]
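
As a concrete taste of what these solvers do, the sketch below poses a tiny satisfiability query through the Z3 Python bindings (the z3-solver package); the constraints themselves are a toy system invented for illustration.

<pre>
# Tiny SMT example with Z3's Python bindings (pip install z3-solver).
from z3 import Real, Solver, sat

x, y = Real('x'), Real('y')
s = Solver()
s.add(x + y == 1, x - y >= 0.5, y > 0)  # linear real-arithmetic constraints

if s.check() == sat:
    print(s.model())  # a satisfying assignment, e.g. x = 3/4, y = 1/4
else:
    print("unsat")
</pre>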

<youtube>DX3G4IoTNF0</youtube>

<youtube>iljZWZzFu7k</youtube>

<youtube>d76e4hV1iJY</youtube>

<youtube>ruNFcH-KibY</youtube>

<youtube>KiKS_zaPb64</youtube>

<youtube>HqlMSnY0b2w</youtube>
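
Reluplex applies this style of reasoning to networks with ReLU activations. As a rough sketch of the underlying idea, the example below encodes a two-neuron ReLU network directly in Z3 and asks whether any input in a given range can push the output past a threshold; the weights, bounds, and threshold are invented for illustration.

<pre>
# Reluplex-style verification sketch in plain Z3: encode a tiny ReLU
# network, then search for an input that violates a safety property.
from z3 import Real, If, Solver, sat

def relu(v):
    return If(v > 0, v, 0)  # exact piecewise encoding of ReLU

x = Real('x')
h1 = relu(2 * x + 1)   # hidden neuron 1
h2 = relu(-3 * x)      # hidden neuron 2
out = h1 - h2          # linear output layer

s = Solver()
s.add(x >= -1, x <= 1)  # input region under verification
s.add(out > 2)          # negation of the safety property out <= 2

if s.check() == sat:
    print("counterexample:", s.model())  # an input that violates the property
else:
    print("verified: out <= 2 on [-1, 1]")
</pre>

Each relu() introduces a case split, so a generic SMT search can blow up exponentially in the number of neurons; Reluplex's contribution is handling those ReLU splits lazily inside a simplex-based procedure, which is what makes verification of realistically sized networks feasible.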
