Boolean Satisfiability (SAT) Problem/Satisfiability Modulo Theories (SMT) Solvers
{{#seo:
|title=PRIMO.ai
|titlemode=append
|keywords=artificial, intelligence, machine, learning, models, algorithms, data, singularity, moonshot, Tensorflow, Google, Nvidia, Microsoft, Azure, Amazon, AWS
|description=Helpful resources for your journey with artificial intelligence; videos, articles, techniques, courses, profiles, and tools
}}
[https://www.youtube.com/results?search_query=~SAT+SMT+Satisfiability+Modulo+Theories+Z3+Reluplex+Deep+Learning+Artificial+Intelligence Youtube search...]
[https://www.google.com/search?q=SAT+SMT+Satisfiability+Modulo+Theories+Z3+Reluplex+deep+machine+learning+ML+artificial+intelligence ...Google search]
  
* [[Offense - Adversarial Threats/Attacks]]
* [https://rise4fun.com/ Rise4Fun - automata concurrency design encoders infrastructure languages security synthesis testing verification language]
* [https://ijcai13.org/files/tutorial_slides/tb1.pdf SAT in AI: high performance search methods with applications]
* [https://stanford.edu/~guyk/pub/CAV2017_R.pdf Reluplex: An Efficient SMT Solver for Verifying Deep Neural Networks]
  
  
In what seems to be an endless back-and-forth between new adversarial attacks and new defenses against those attacks, we would like a means of formally verifying the robustness of machine learning algorithms to adversarial attacks. In the [[privacy]] domain, there is the idea of a differential [[privacy]] budget, which quantifies [[privacy]] over all possible attacks. In the following three papers, we see attempts at deriving an equivalent benchmark for security, one that will allow the evaluation of defenses against all possible attacks instead of just a specific one; a minimal SMT sketch of this verification idea appears after the paper list below. [https://secml.github.io/class6/ Class 6: Measuring Robustness of ML Models]
  
* Nicholas Carlini, Guy Katz, Clark Barrett, David L. Dill. [https://arxiv.org/pdf/1709.10207.pdf Provably Minimally-Distorted Adversarial Examples] 20 Feb 2018
  
* Guy Katz, Clark Barrett, David Dill, Kyle Julian, Mykel Kochenderfer. [https://arxiv.org/pdf/1702.01135.pdf Reluplex: An Efficient SMT Solver for Verifying Deep Neural Networks] 19 May 2017
  
* Tsui-Wei Weng, Huan Zhang, Pin-Yu Chen, Jinfeng Yi, Dong Su, Yupeng Gao, Cho-Jui Hsieh, Luca Daniel. [https://arxiv.org/pdf/1801.10578.pdf Evaluating the Robustness of Neural Networks: An Extreme Value Theory Approach] 31 Jan 2018
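
As a rough illustration of the idea behind these papers, the sketch below encodes a robustness query for a toy one-neuron ReLU "network" as an SMT problem and hands it to Z3 (the z3-solver Python package). All weights, the nominal input, and the radius eps are made-up values for illustration; this is a naive encoding, not the Reluplex algorithm itself, which extends the simplex method to handle ReLU constraints lazily. An unsat answer proves that no adversarial input exists inside the perturbation ball; a sat answer produces a concrete one.

<syntaxhighlight lang="python">
# Minimal sketch: robustness of a toy ReLU network y = w2*relu(w1*x + b1) + b2,
# checked with the Z3 SMT solver (pip install z3-solver). All numbers below are
# illustrative assumptions, not values from any of the papers above.
from z3 import Real, Solver, If, sat

w1, b1, w2, b2 = 2.0, -1.0, 1.5, 0.5   # toy network parameters (assumed)
x0, eps = 1.0, 0.25                    # nominal input and perturbation budget

x = Real('x')                          # symbolic input
hidden = w1 * x + b1
relu = If(hidden > 0, hidden, 0)       # ReLU encoded as an if-then-else term
y = w2 * relu + b2                     # symbolic network output

y0 = w2 * max(w1 * x0 + b1, 0.0) + b2  # concrete output at the nominal input

s = Solver()
s.add(x >= x0 - eps, x <= x0 + eps)    # stay inside the L-infinity ball
s.add(y * y0 < 0)                      # adversarial goal: flip the output's sign

if s.check() == sat:
    print('adversarial input found:', s.model()[x])   # robustness refuted
else:
    print('provably robust for eps =', eps)           # no attack exists in the ball
</syntaxhighlight>

Each if-then-else ReLU in this naive encoding is a potential case split for the solver, so the search space grows exponentially with network size; taming that blow-up on realistically sized networks is precisely the contribution of Reluplex.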
  
 
<youtube>DX3G4IoTNF0</youtube>
 
