Offense - Adversarial Threats/Attacks

[https://www.youtube.com/results?search_query=Adversarial+threat+attack+defcon+Deep+Learning+Artificial+Intelligence Youtube search...]
[https://www.google.com/search?q=adversarial+threat+attack+defcon+deep+machine+learning+ML+artificial+intelligence ...Google search]

* [[Cybersecurity]] ... [[Open-Source Intelligence - OSINT |OSINT]] ... [[Cybersecurity Frameworks, Architectures & Roadmaps | Frameworks]] ... [[Cybersecurity References|References]] ... [[Offense - Adversarial Threats/Attacks| Offense]] ... [[National Institute of Standards and Technology (NIST)|NIST]] ... [[U.S. Department of Homeland Security (DHS)| DHS]] ... [[Screening; Passenger, Luggage, & Cargo|Screening]] ... [[Law Enforcement]] ... [[Government Services|Government]] ... [[Defense]] ... [[Joint Capabilities Integration and Development System (JCIDS)#Cybersecurity & Acquisition Lifecycle Integration| Lifecycle Integration]] ... [[Cybersecurity Companies/Products|Products]] ... [[Cybersecurity: Evaluating & Selling|Evaluating]]
* [[Boolean Satisfiability (SAT) Problem/Satisfiability Modulo Theories (SMT) Solvers]]
* [[Defenses Against Adversarial Attacks]]
 
______________________________________________________
 
  
  
* [https://www.cleverhans.io/ Cleverhans] - blog and library for benchmarking the vulnerability of machine learning models to adversarial examples
* [https://github.com/nababora/advML Adversarial Machine Learning for Anti-Malware Software | nababora @ GitHub]
* [https://github.com/cchio/deep-pwning Deep-pwning/Metasploit | Clarence Chio]
* [https://evademl.org/ EvadeML.org | University of Virginia]
* [https://arxiv.org/pdf/1611.04786.pdf AdversariaLib: An Open-source Library for the Security Evaluation of Machine Learning Algorithms Under Attack]
* [https://pralab.diee.unica.it/en Pattern Recognition and Applications Lab (PRA Lab)]
** [https://pralab.diee.unica.it/en/AdversariaLib AdversariaLib | PRA Lab]
** [https://pralab.diee.unica.it/en/ALFASVMLib Adversarial label flip attacks against Support Vector Machines (ALFASVMLib) | PRA Lab]
* [https://www.marktechpost.com/2022/11/25/this-invisible-sweater-developed-by-the-university-of-maryland-tricks-artificial-intelligence-ai-cameras-and-stops-them-from-recognizing-people/ This Invisible Sweater Developed by the University of Maryland Tricks Artificial Intelligence (AI) Cameras and Stops Them from Recognizing People | Ashish Kumar - MarkTechPost]
  
Adversarial examples are inputs to machine learning models that an attacker has intentionally designed to cause the model to make a mistake; they’re like optical illusions for machines. Myth: An attacker must have access to the model to generate adversarial examples. Fact: Adversarial examples generalize across models trained to perform the same task, even if those models have different architectures and were trained on a different training set. This means an attacker can train their own model, generate adversarial examples against it, and then deploy those adversarial examples against a model they do not have access to. - [https://www.kdnuggets.com/2015/07/deep-learning-adversarial-examples-misconceptions.html Deep Learning Adversarial Examples – Clarifying Misconceptions | Goodfellow et al.]
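Transferability can be demonstrated with a small, self-contained experiment. The sketch below is a minimal illustration in plain NumPy, not code from the sources above; the helper names (make_data, train_logreg, fgsm) are invented here, and both models share the same toy logistic-regression architecture for simplicity. It trains an "attacker" model and a "victim" model on disjoint halves of a toy dataset, crafts adversarial examples with the Fast Gradient Sign Method (FGSM) against the attacker's own model, and then checks how much they also degrade the victim model the attacker never had access to.

<syntaxhighlight lang="python">
# Illustrative sketch: FGSM adversarial examples and their transfer between two
# independently trained logistic-regression models (toy data, NumPy only).
import numpy as np

rng = np.random.default_rng(0)

def make_data(n=2000, d=20):
    """Two Gaussian blobs in d dimensions; labels in {0, 1}."""
    X0 = rng.normal(loc=-1.0, size=(n // 2, d))
    X1 = rng.normal(loc=+1.0, size=(n // 2, d))
    X = np.vstack([X0, X1])
    y = np.array([0] * (n // 2) + [1] * (n // 2))
    idx = rng.permutation(n)
    return X[idx], y[idx]

def train_logreg(X, y, lr=0.1, epochs=200):
    """Plain gradient-descent logistic regression; returns (weights, bias)."""
    w = np.zeros(X.shape[1]); b = 0.0
    for _ in range(epochs):
        p = 1.0 / (1.0 + np.exp(-(X @ w + b)))   # predicted P(y = 1)
        w -= lr * X.T @ (p - y) / len(y)
        b -= lr * np.mean(p - y)
    return w, b

def predict(w, b, X):
    return (X @ w + b > 0).astype(int)

def fgsm(w, b, X, y, eps=1.5):
    """Fast Gradient Sign Method for logistic regression: the input-gradient of
    the cross-entropy loss is (p - y) * w, so add eps * sign of that gradient.
    eps is chosen large enough to give a clear effect on this toy problem."""
    p = 1.0 / (1.0 + np.exp(-(X @ w + b)))
    grad_x = (p - y)[:, None] * w[None, :]
    return X + eps * np.sign(grad_x)

X, y = make_data()
w_att, b_att = train_logreg(X[:1000], y[:1000])    # attacker's own surrogate model
w_vic, b_vic = train_logreg(X[1000:], y[1000:])    # victim model the attacker never sees

X_test, y_test = make_data(400)
X_adv = fgsm(w_att, b_att, X_test, y_test)         # crafted against the attacker's model only

print("victim accuracy on clean inputs:       ", np.mean(predict(w_vic, b_vic, X_test) == y_test))
print("victim accuracy on transferred attacks:", np.mean(predict(w_vic, b_vic, X_adv) == y_test))
</syntaxhighlight>

Running it typically shows the victim's accuracy collapsing on the transferred examples even though they were crafted against a different model trained on different data, which is exactly the transferability property described above.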
  
<youtube>zLZR7lxl5bc</youtube>
<youtube>wbRx18VZlYA</youtube>
<youtube>JAGDpJFFM2A</youtube>
<youtube>NrGMvTZxAwU</youtube>
<youtube>4rFOkpI0Lcg</youtube>
<youtube>j9FLOinaG94</youtube>
<youtube>M2IebCN9Ht4</youtube>
<youtube>sFhD6ABghf8</youtube>
<youtube>dfgOar_jaG0</youtube>
<youtube>hmUPhRtS_pY</youtube>
<youtube>cjo_u_yT2wQ</youtube>
<youtube>KGKlAQ8tH5o</youtube>
  
= Data Poisoning =
[https://www.youtube.com/results?search_query=Poisoning+Label+Flipping+Adversarial+threat+attack+defcon+Deep+Learning+Artificial+Intelligence Youtube search...]
[https://www.google.com/search?q=Poisoning+Label+Flippingadversarial+threat+attack+defcon+deep+machine+learning+ML+artificial+intelligence ...Google search]

* [https://arxiv.org/pdf/2207.01982.pdf Defending against the Label-flipping Attack in Federated Learning | Najeeb Moharram Jebreel, Josep Domingo-Ferrer, David Sánchez and Alberto Blanco-Justicia]

Data poisoning or model poisoning attacks involve polluting a machine learning model's training data. Data poisoning is considered an integrity attack because tampering with the training data impacts the model's ability to output correct predictions. Other types of attacks can be similarly classified based on their impact:

* Confidentiality, where the attackers can infer potentially confidential information about the training data by feeding inputs to the model
* Availability, where the attackers disguise their inputs to trick the model in order to evade correct classification
* Replication, where attackers can reverse-engineer the model in order to replicate it and analyze it locally to prepare attacks or exploit it for their own financial gain

The difference between an attack that is meant to evade a model's prediction or classification and a poisoning attack is persistence: with poisoning, the attacker's goal is to get their inputs accepted as training data. The length of the attack also differs because it depends on the model's training cycle; it might take weeks for the attacker to achieve their poisoning goal.

Data poisoning can be achieved either in a black-box scenario against classifiers that rely on user feedback to update their learning, or in a white-box scenario where the attacker gains access to the model and its private training data, possibly somewhere in the supply chain if the training data is collected from multiple sources. - [https://www.csoonline.com/article/3613932/how-data-poisoning-attacks-corrupt-machine-learning-models.html How data poisoning attacks corrupt machine learning models | Lucian Constantin - CSO]
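As a concrete, deliberately simplified illustration of an integrity attack in the spirit of the label-flipping work referenced above, the sketch below poisons a toy training set and retrains a logistic-regression model on it. It is not taken from the cited papers; the helper names (make_data, train_logreg, poison_labels) are invented for this example, and for simplicity the clean model stands in for the attacker's surrogate.

<syntaxhighlight lang="python">
# Minimal adversarial label-flipping (data poisoning) sketch: compare a model
# trained on clean data with one retrained after mislabeled points are accepted
# into the training set. Illustrative only; NumPy, toy linear data.
import numpy as np

rng = np.random.default_rng(1)
d = 10
w_true = rng.normal(size=d)                 # ground-truth decision boundary

def make_data(n):
    X = rng.normal(size=(n, d))
    return X, (X @ w_true > 0).astype(int)

def train_logreg(X, y, lr=0.1, epochs=300):
    """Plain gradient-descent logistic regression; returns (weights, bias)."""
    w = np.zeros(X.shape[1]); b = 0.0
    for _ in range(epochs):
        p = 1.0 / (1.0 + np.exp(-(X @ w + b)))
        w -= lr * X.T @ (p - y) / len(y)
        b -= lr * np.mean(p - y)
    return w, b

def accuracy(w, b, X, y):
    return np.mean(((X @ w + b) > 0).astype(int) == y)

def poison_labels(X, y, w_surrogate, b_surrogate, n_flips):
    """Adversarial label flipping: flip the class-1 training points that the
    attacker's surrogate model scores most confidently, so the mislabeled
    points exert maximum pull on the retrained decision boundary."""
    scores = X @ w_surrogate + b_surrogate
    candidates = np.where(y == 1)[0]
    worst = candidates[np.argsort(scores[candidates])[-n_flips:]]
    y_poisoned = y.copy()
    y_poisoned[worst] = 0
    return y_poisoned

X_train, y_train = make_data(2000)
X_test, y_test = make_data(500)

w_clean, b_clean = train_logreg(X_train, y_train)                      # before poisoning
y_bad = poison_labels(X_train, y_train, w_clean, b_clean, n_flips=400)
w_pois, b_pois = train_logreg(X_train, y_bad)                          # retrained on poisoned data

print("accuracy trained on clean data:   ", accuracy(w_clean, b_clean, X_test, y_test))
print("accuracy trained on poisoned data:", accuracy(w_pois, b_pois, X_test, y_test))
</syntaxhighlight>

On a typical run the clean model classifies the held-out test set well while the retrained model's accuracy drops sharply, which is the integrity damage described above: the attack only pays off once the flipped labels persist into a training cycle.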
  
<youtube>OB54mKmUjmI</youtube>
<youtube>WO3zRP9c4p0</youtube>
<youtube>ujL8j4QS0rA</youtube>
<youtube>ZZky3uf-9IM</youtube>


= <span id="Side Channel Attack (SCA)"></span>Side Channel Attack (SCA) =
* [[Quantum#Cryptography | Quantum Cryptography]]
* [[Cybersecurity: National Institute of Standards and Technology (NIST) & U.S. Department of Homeland Security (DHS)#Post-Quantum Cryptography (PQC) | Post-Quantum Cryptography (PQC)]]
* [https://github.com/PRASANNA-RAVI/SCA_protected_Kyber Side-Channel Attack protected Implementation of Kyber, a Lattice-based KEM scheme which is part of the NIST standardization | Prasanna-Ravi] ... employs a cheap, low-entropy masking scheme based on multiplicative masking with powers of the twiddle factors, whose products are already pre-computed.
* [https://dl.acm.org/doi/10.1145/3517810 A Review and Comparison of AI-enhanced Side Channel Analysis | M. Panoff, H. Yu, H. Shan, & Y. Jin - Association for Computing Machinery (ACM)]
* [https://iopscience.iop.org/article/10.1088/1742-6596/1213/2/022013/pdf Overview of Side Channel Cipher Analysis Based on Deep Learning | S. Song, K. Chen, & Y. Zhang - Institute of Physics]
* [https://eprint.iacr.org/2022/527.pdf PQC-SEP: Power Side-channel Evaluation Platform for Post-Quantum Cryptography Algorithms | J. Park, N. Anandakumar, D. Saha, D. Mehta, N. Pundir, F. Rahman, F. Farahmandi, & M. Tehranipoor - University of Florida]
* [https://diglib.tugraz.at/download.php?id=5d7ac539b31eb&location=browse Side-Channel Attacks on Lattice-Based Cryptography and Multi-Processor Systems | Peter Peß]
* [https://www.design-reuse.com/industryexpertblogs/53785/mitigating-side-channel-attacks-in-post-quantum-cryptography-pqc.html Mitigating Side-Channel Attacks In Post Quantum Cryptography (PQC) With Secure-IC Solutions | Secure-IC]
* [https://www.scientificamerican.com/article/hackers-can-steal-from-reflections/ How Hackers Can Steal Secrets from Reflections | W. Wayt Gibbs - Scientific American] ... information thieves can now go around encryption, networks, and the operating system
  
In computer security, a side-channel attack is any attack based on extra information that can be gathered because of the fundamental way a computer protocol or algorithm is implemented, rather than flaws in the design of the protocol or algorithm itself (e.g. flaws found in a cryptanalysis of a cryptographic algorithm) or minor, but potentially devastating, mistakes or oversights in the implementation. (Cryptanalysis also includes searching for side-channel attacks.) Timing information, power consumption, electromagnetic leaks, and sound are examples of extra information which could be exploited to facilitate side-channel attacks. - [https://en.wikipedia.org/wiki/Side-channel_attack Wikipedia]
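The timing channel mentioned above can be illustrated with a deliberately exaggerated toy in Python. This is a classroom-style sketch, not an attack on any real system: the secret, the artificial time.sleep delay, and helper names such as insecure_compare and recover_secret are all invented here, and real attacks must average far more measurements to beat noise. A comparison routine that returns at the first mismatching byte takes slightly longer the more of the secret a guess gets right, so an attacker who can only observe response times can recover the secret one byte at a time.

<syntaxhighlight lang="python">
# Toy timing side channel: a non-constant-time comparison leaks, through its
# running time, how long a correct prefix the attacker has guessed.
import time

SECRET = b"hunter2"                                   # hypothetical secret to recover
CANDIDATES = b"abcdefghijklmnopqrstuvwxyz0123456789"  # alphabet the secret is drawn from

def insecure_compare(guess: bytes, secret: bytes = SECRET) -> bool:
    """Non-constant-time comparison: bails out at the first differing byte.
    The sleep exaggerates the per-byte cost so the leak is visible in a demo."""
    if len(guess) != len(secret):
        return False
    for g, s in zip(guess, secret):
        if g != s:
            return False
        time.sleep(0.0005)
    return True

def measure(guess: bytes, trials: int = 15) -> float:
    """Median running time of the comparison for a given guess."""
    timings = []
    for _ in range(trials):
        start = time.perf_counter()
        insecure_compare(guess)
        timings.append(time.perf_counter() - start)
    timings.sort()
    return timings[len(timings) // 2]

def recover_secret(length: int) -> bytes:
    """Recover the secret one byte at a time: at each position, keep the
    candidate whose (padded) guess takes longest to be rejected."""
    known = b""
    for _ in range(length):
        padding = b"\x00" * (length - len(known) - 1)
        best = max(CANDIDATES, key=lambda c: measure(known + bytes([c]) + padding))
        known += bytes([best])
        print("recovered so far:", known)
    return known

if __name__ == "__main__":
    recover_secret(len(SECRET))
</syntaxhighlight>

The fix in practice is a constant-time comparison (for example, hmac.compare_digest in Python), which removes the correlation between the secret's contents and the observable running time.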
  
<youtube>8Cb-YefLyhM</youtube>

<youtube>rb1wzJeGbD8</youtube>