Offense - Adversarial Threats/Attacks
Youtube search... ...Google search
- Boolean Satisfiability (SAT) Problem/Satisfiability Modulo Theories (SMT) Solvers
- Defenses Against Adversarial Attacks
______________________________________________________
- CleverHans - library for benchmarking the vulnerability of machine learning models to adversarial examples | blog
- Adversarial Machine Learning for Anti-Malware Software | nababora @ GitHub
- Deep-pwning/Metasploit | Clarence Chio
- EvadeML.org | University of Virginia
- AdversariaLib: An Open-source Library for the Security Evaluation of Machine Learning Algorithms Under Attack
- Pattern Recognition and Applications Lab (PRA Lab)
- This Invisible Sweater Developed by the University of Maryland Tricks Artificial Intelligence (AI) Cameras and Stops them from Recognizing People | Ashish Kumar - MarkTechPost
Adversarial examples are inputs to machine learning models that an attacker has intentionally designed to cause the model to make a mistake; they’re like optical illusions for machines. Myth: An attacker must have access to the model to generate adversarial examples. Fact: Adversarial examples generalize across models trained to perform the same task, even if those models have different architectures and were trained on a different training set. This means an attacker can train their own model, generate adversarial examples against it, and then deploy those adversarial examples against a model they do not have access to. - Deep Learning Adversarial Examples – Clarifying Misconceptions | Goodfellow et al.
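To make the transferability point concrete, here is a minimal sketch of the Fast Gradient Sign Method (FGSM), one common way to craft such adversarial examples. It assumes PyTorch; the `model`, `image`, `label`, and `epsilon` values are placeholders for illustration, not anything from the projects listed above.

```python
# Minimal FGSM sketch (PyTorch assumed). `model`, `image`, `label`, and
# `epsilon` are illustrative placeholders.
import torch
import torch.nn.functional as F

def fgsm_example(model, image, label, epsilon=0.03):
    """Craft an adversarial example with the Fast Gradient Sign Method."""
    image = image.clone().detach().requires_grad_(True)
    loss = F.cross_entropy(model(image), label)        # loss w.r.t. the true label
    loss.backward()                                     # gradient of loss w.r.t. the pixels
    perturbation = epsilon * image.grad.sign()          # small step in the sign of the gradient
    adversarial = (image + perturbation).clamp(0, 1)    # keep pixels in a valid range
    return adversarial.detach()

# Because adversarial examples transfer, an input crafted against a local
# surrogate model will often also fool a different model trained on the same task.
```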
Data Poisoning
Youtube search... ...Google search
Data poisoning or model poisoning attacks involve polluting a machine learning model's training data. Data poisoning is considered an integrity attack because tampering with the training data impacts the model's ability to output correct predictions. Other types of attacks can be similarly classified based on their impact:
- Confidentiality, where the attackers can infer potentially confidential information about the training data by feeding inputs to the model
- Availability, where the attackers disguise their inputs to trick the model in order to evade correct classification
- Replication, where attackers can reverse-engineer the model in order to replicate it and analyze it locally to prepare attacks or exploit it for their own financial gain
The difference between an attack that is meant to evade a model's prediction or classification and a poisoning attack is persistence: with poisoning, the attacker's goal is to get their inputs accepted as training data. The timescale of the attack also differs because it depends on the model's training cycle; it might take weeks for the attacker to achieve their poisoning goal.
Data poisoning can be achieved either in a black-box scenario against classifiers that rely on user feedback to update their learning or in a white-box scenario where the attacker gains access to the model and its private training data, possibly somewhere in the supply chain if the training data is collected from multiple sources. - How data poisoning attacks corrupt machine learning models | Lucian Constantin - CSO
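As an illustration of an integrity-style poisoning attack, the sketch below flips a fraction of training labels before fitting a simple classifier and compares test accuracy against a clean baseline. It assumes scikit-learn; the synthetic dataset and 20% flip rate are arbitrary illustrative choices.

```python
# Illustrative label-flipping poisoning sketch (scikit-learn assumed).
# The synthetic dataset and 20% flip rate are arbitrary demonstration values.
import numpy as np
from sklearn.datasets import make_classification
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import train_test_split

X, y = make_classification(n_samples=2000, n_features=20, random_state=0)
X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)

# Attacker flips the labels of a fraction of the training set (integrity attack).
rng = np.random.default_rng(0)
poisoned = y_train.copy()
idx = rng.choice(len(poisoned), size=int(0.2 * len(poisoned)), replace=False)
poisoned[idx] = 1 - poisoned[idx]

clean_acc = LogisticRegression(max_iter=1000).fit(X_train, y_train).score(X_test, y_test)
poison_acc = LogisticRegression(max_iter=1000).fit(X_train, poisoned).score(X_test, y_test)
print(f"clean: {clean_acc:.3f}  poisoned: {poison_acc:.3f}")  # accuracy typically drops
```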
Side Channel Attack (SCA)
- Quantum Cryptography
- Post-Quantum Cryptography (PQC)
- Side-Channel Attack protected Implementation of Kyber, a lattice-based KEM scheme that is part of the NIST standardization | Prasanna-Ravi ... employs cheap, low-entropy multiplicative masking with powers of the twiddle factors, whose products are already pre-computed.
- A Review and Comparison of AI-enhanced Side Channel Analysis | M. Panoff, H. Yu, H. Shan, & Y. Jin - Association for Computing Machinery (ACM)
- Overview of Side Channel Cipher Analysis Based on Deep Learning | S. Song, K. Chen, & Y. Zhang - Institute of Physics
- PQC-SEP: Power Side-channel Evaluation Platform for Post-Quantum Cryptography Algorithms | J. Park, N. Anandakumar, D. Saha, D. Mehta, N. Pundir, F. Rahman, F. Farahmandi, & M. Tehranipoor - University of Florida
- Side-Channel Attacks on Lattice-Based Cryptography and Multi-Processor Systems | Peter Peß
- Mitigating Side-Channel Attacks In Post Quantum Cryptography (PQC) With Secure-IC Solutions | Secure-IC
- How Hackers Can Steal Secrets from Reflections | W. Wayt Gibbs - Scientific American ... Information thieves can now go around encryption, networks and the operating system
In computer security, a side-channel attack is any attack based on extra information that can be gathered because of the fundamental way a computer protocol or algorithm is implemented, rather than flaws in the design of the protocol or algorithm itself (e.g. flaws found in a cryptanalysis of a cryptographic algorithm) or minor, but potentially devastating, mistakes or oversights in the implementation. (Cryptanalysis also includes searching for side-channel attacks.) Timing information, power consumption, electromagnetic leaks, and sound are examples of extra information which could be exploited to facilitate side-channel attacks. - Wikipedia
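As a toy illustration of the timing channel mentioned above, the sketch below compares an early-exit byte comparison, whose running time depends on how many leading bytes of a guess match a secret, with Python's constant-time `hmac.compare_digest`. The secret, guesses, and trial count are illustrative assumptions; a real attack needs statistics over many noisy measurements.

```python
# Toy timing side-channel sketch: an early-exit comparison leaks how many
# leading bytes of a guess are correct. Secret and timing loop are illustrative.
import hmac
import time

SECRET = b"hunter2hunter2"  # hypothetical secret token

def leaky_equals(a: bytes, b: bytes) -> bool:
    if len(a) != len(b):
        return False
    for x, y in zip(a, b):
        if x != y:          # returns at the first mismatch -> time depends on matched prefix
            return False
    return True

def measure(guess: bytes, trials: int = 20000) -> float:
    start = time.perf_counter()
    for _ in range(trials):
        leaky_equals(SECRET, guess)
    return time.perf_counter() - start

print(measure(b"xxxxxxxxxxxxxx"))   # no matching prefix -> fastest
print(measure(b"hunter2xxxxxxx"))   # longer matching prefix -> measurably slower

# Mitigation: compare in constant time so timing reveals nothing about the prefix.
print(hmac.compare_digest(SECRET, b"hunter2hunter2"))
```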