Boolean Satisfiability (SAT) Problem/Satisfiability Modulo Theories (SMT) Solvers
[http://www.youtube.com/results?search_query=~SAT+SMT+Satisfiability+Modulo+Theories+Z3+Reluplex+Deep+Learning+Artificial+Intelligence Youtube search...]
* [[Offense - Adversarial Threats/Attacks]]
* [http://rise4fun.com/ Rise4Fun - automata concurrency design encoders infrastructure languages security synthesis testing verification language]
* [http://ijcai13.org/files/tutorial_slides/tb1.pdf SAT in AI: high performance search methods with applications]
* Reluplex: An Efficient SMT Solver for Verifying Deep Neural Networks
In what seems to be an endless back-and-forth between new adversarial attacks and new defenses against those attacks, we would like a means of formally verifying the robustness of machine learning algorithms to adversarial attacks. The privacy domain has the notion of a differential privacy budget, which quantifies privacy over all possible attacks; the following three papers attempt to derive an equivalent benchmark for security, one that allows a defense to be evaluated against all possible attacks instead of just a specific one. A minimal SMT sketch of the underlying idea follows the paper list below.

Class 6: Measuring Robustness of ML Models
* Nicholas Carlini, Guy Katz, Clark Barrett, David L. Dill. Provably Minimally-Distorted Adversarial Examples. 20 Feb 2018
* Guy Katz, Clark Barrett, David Dill, Kyle Julian, Mykel Kochenderfer. Reluplex: An Efficient SMT Solver for Verifying Deep Neural Networks. 19 May 2017
* Tsui-Wei Weng, Huan Zhang, Pin-Yu Chen, Jinfeng Yi, Dong Su, Yupeng Gao, Cho-Jui Hsieh, Luca Daniel. Evaluating the Robustness of Neural Networks: An Extreme Value Theory Approach. 31 Jan 2018
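To make the SMT-based verification idea concrete, here is a minimal sketch using the Z3 Python bindings (the z3-solver package). It is not code from any of the papers above: the one-neuron ReLU network, its weights, the nominal input, the perturbation radius, and the output threshold are all made-up illustrative values. The solver is asked for a counterexample to a robustness property; an unsat answer means no input in the perturbation ball violates the property, so it is verified over the whole ball.

# Minimal sketch of SMT-based robustness checking with Z3 (pip install z3-solver).
# The network, weights, input point, radius, and threshold are all illustrative.
from z3 import Real, If, Solver, And, unsat

# Toy network: y = relu(w1*x1 + w2*x2 + b) with hand-picked weights.
w1, w2, b = 2.0, -1.0, 0.5

x1, x2 = Real("x1"), Real("x2")
pre = w1 * x1 + w2 * x2 + b
y = If(pre > 0, pre, 0)  # ReLU encoded as an if-then-else term

# Nominal input and L-infinity perturbation budget.
x1_0, x2_0, eps = 1.0, 0.0, 0.1

s = Solver()
s.add(And(x1 >= x1_0 - eps, x1 <= x1_0 + eps,
          x2 >= x2_0 - eps, x2 <= x2_0 + eps))
# Negate the property "y > 2.0 everywhere in the ball": look for a counterexample.
s.add(y <= 2.0)

if s.check() == unsat:
    print("Verified: y > 2.0 for every input in the eps-ball")
else:
    print("Counterexample found:", s.model())

Eagerly encoding every ReLU as an if-then-else term like this blows up quickly as networks grow; Reluplex's contribution is to extend the simplex algorithm so that ReLU constraints are handled lazily, splitting on a neuron's phase only when necessary, which is what makes such queries tractable on real networks.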