Prompt Injection Attack

Jump to: navigation, search

YouTube search... ...Google search

...a new vulnerability that is affecting some AI/ML models and, in particular, certain types of language models using prompt-based learning. ... create a malicious input that made a language model change its expected behaviour. - Exploring Prompt Injection Attacks | NCC Group

Prompt injection is a family of related computer security exploits carried out by getting machine learning models (such as large language model) which were trained to follow human-given instructions to follow instructions provided by a malicious user, which stands in contrast to the intended operation of instruction-following systems, wherein the ML model is intended only to follow trusted instructions (prompt) provided by the ML model's operator. Around 2023, prompt injection was seen "in the wild" in minor exploits against ChatGPT and similar Chatbots, for example to reveal the hidden initial prompts of the systems, or to trick the Chatbot into participating in conversations that violate the Chatbot's content policy. Wikipedia

What is GPT-3 Prompt Injection & Prompt Leaking? AI Adversarial Attacks
In this video, we take a deeper look at GPT-3 or any Large Language Model's Prompt Injection & Prompt Leaking. These are security exploitation in Prompt Engineering. These are also AI Adversarial Attacks. The name Prompt Injection comes from the age-old SQL Injection where a malicious SQL script can be added to a web form to manipulate the underlying SQL query. In a similar fashion, Prompts can be altered to get abnormal results from a LLM or GPT-3 based Application.

GPT2 Unlimited-Length Generation with Hidden Prompt Injections - Code Review
Unlimited-Length Imagination Directed GPT2 Chained Generation by Overlapping Prompt-Injections. The same idea can be applied for any similar generative model with a prompt for producing more creative text and for changing the topic in a directed manner, which makes the text more interesting and original and less monotonous.

JailBreaking ChatGPT Meaning - JailBreak ChatGPT with DAN Explained
This video teaches you

  • 1. What's Jailbreaking in General?
  • 2. what's JailBreaking of ChatGPT means?
  • 3. JailBreaking Prompt explanation
  • 4. Jailbreaking ChatGPT with DAN "Do Anything Now"
  • 5. Prompt Injection
  • 6. Does Jail Breaking work or is it hallucinations?

Update: ChatGPT (GPT-3) Hack. AI Text Security Breach Found! Why it's Serious
We discuss a bug found in Artificial Intelligence (AI) language model, GPT-3. The weakness found is a common one found in other computer languages like SQL. This flaw was recently discovered & will probably be fixed in future releases. This hack also applies to the newly released ChatGPT.


  • 00:00 Introduction
  • 00:21 GPT-3
  • 00:37 Data Breach
  • 01:34 Bug Discovery
  • 05:27 Hacking Bot