Offense - Adversarial Threats/Attacks
YouTube search... ...Google search
- Cybersecurity
- Government Services
- Capabilities
- Boolean Satisfiability (SAT) Problem/Satisfiability Modulo Theories (SMT) Solvers
- Defenses Against Adversarial Attacks
______________________________________________________
- This Technique Uses AI to Fool Other AIs | Will Knight
- TextFooler | Di Jin
- Cleverhans - library for benchmarking the vulnerability of machine learning models to adversarial examples (see the usage sketch after this list)
- Adversarial Machine Learning for Anti-Malware Software | nababora @ GitHub
- Deep-pwning/Metasploit | Clarence Chio
- EvadeML.org | University of Virginia
- AdversariaLib: An Open-source Library for the Security Evaluation of Machine Learning Algorithms Under Attack
- Pattern Recognition and Applications Lab (PRA Lab)
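To make the benchmarking idea concrete, here is a minimal sketch of measuring a model's robustness with Cleverhans, the second tool listed above. It assumes CleverHans 4.x with its PyTorch interface; `model` and `loader` are hypothetical stand-ins for a trained classifier and a test DataLoader whose images have pixel values in [0, 1].

```python
# Minimal robustness benchmark with CleverHans (assumes cleverhans 4.x, PyTorch).
# `model` and `loader` are hypothetical: any trained torch classifier and a
# DataLoader of (image, label) batches with pixels in [0, 1] would do.
import numpy as np
import torch
from cleverhans.torch.attacks.fast_gradient_method import fast_gradient_method

def benchmark_fgm(model, loader, eps=8 / 255):
    """Report accuracy before and after an L-infinity Fast Gradient Method attack."""
    model.eval()
    clean, adv, n = 0, 0, 0
    for x, y in loader:
        # Craft an eps-bounded adversarial version of the batch against `model`.
        x_adv = fast_gradient_method(model, x, eps, np.inf, clip_min=0.0, clip_max=1.0)
        with torch.no_grad():
            clean += (model(x).argmax(1) == y).sum().item()
            adv += (model(x_adv).argmax(1) == y).sum().item()
        n += len(y)
    print(f"clean accuracy {clean / n:.3f} -> adversarial accuracy {adv / n:.3f}")
```

Swapping in the library's iterative projected_gradient_descent attack, from the same cleverhans.torch.attacks package, typically gives a stronger baseline than the single-step method shown here.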
Adversarial examples are inputs to machine learning models that an attacker has intentionally designed to cause the model to make a mistake; they're like optical illusions for machines.

- Myth: An attacker must have access to the model to generate adversarial examples.
- Fact: Adversarial examples generalize across models trained to perform the same task, even if those models have different architectures and were trained on a different training set. This means an attacker can train their own model, generate adversarial examples against it, and then deploy those adversarial examples against a model they do not have access to.

From: Deep Learning Adversarial Examples – Clarifying Misconceptions | Goodfellow et al.
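The transferability claim above lends itself to a short demonstration. The sketch below hand-rolls the single-step Fast Gradient Sign Method rather than calling a library, so the attack math stays visible; `surrogate`, `target`, and `loader` are hypothetical stand-ins for two independently trained classifiers and a test DataLoader with pixels in [0, 1]. A large drop in the target's accuracy on examples crafted only from the surrogate's gradients illustrates the "Fact" above.

```python
# Black-box transfer attack sketch: adversarial examples are crafted against a
# surrogate model the attacker controls, then evaluated on a separate target
# model the attacker never queried. `surrogate`, `target`, and `loader` are
# hypothetical placeholders.
import torch
import torch.nn.functional as F

def fgsm(model, x, y, eps):
    """One signed-gradient step of size eps (Fast Gradient Sign Method)."""
    x = x.clone().detach().requires_grad_(True)
    loss = F.cross_entropy(model(x), y)
    loss.backward()
    # Move each pixel in the direction that increases the loss; stay in [0, 1].
    return (x + eps * x.grad.sign()).clamp(0.0, 1.0).detach()

@torch.no_grad()
def accuracy(model, x, y):
    return (model(x).argmax(dim=1) == y).float().mean().item()

def transfer_attack_demo(surrogate, target, loader, eps=8 / 255):
    surrogate.eval()
    target.eval()
    clean, adv, n = 0.0, 0.0, 0
    for x, y in loader:
        # Craft adversarial examples using ONLY the surrogate's gradients...
        x_adv = fgsm(surrogate, x, y, eps)
        # ...then test them against the target the attacker has no access to.
        clean += accuracy(target, x, y) * len(y)
        adv += accuracy(target, x_adv, y) * len(y)
        n += len(y)
    print(f"target: clean acc {clean / n:.3f}, transferred adversarial acc {adv / n:.3f}")
```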