Difference between revisions of "Prompt Injection Attack"
m |
m |
||
Line 38: | Line 38: | ||
<b>GPT2 Unlimited-Length Generation with Hidden Prompt Injections - Code Review | <b>GPT2 Unlimited-Length Generation with Hidden Prompt Injections - Code Review | ||
</b><br>Unlimited-Length Imagination Directed GPT2 Chained Generation by Overlapping Prompt-Injections. The same idea can be applied for any similar generative model with a prompt for producing more creative text and for changing the topic in a directed manner, which makes the text more interesting and original and less monotonous. | </b><br>Unlimited-Length Imagination Directed GPT2 Chained Generation by Overlapping Prompt-Injections. The same idea can be applied for any similar generative model with a prompt for producing more creative text and for changing the topic in a directed manner, which makes the text more interesting and original and less monotonous. | ||
+ | |} | ||
+ | |}<!-- B --> | ||
+ | |||
+ | {|<!-- T --> | ||
+ | | valign="top" | | ||
+ | {| class="wikitable" style="width: 550px;" | ||
+ | || | ||
+ | <youtube>vJjPKhPzbPE</youtube> | ||
+ | <b>JailBreaking ChatGPT Meaning - JailBreak ChatGPT with DAN Explained | ||
+ | </b><br>This video teaches you | ||
+ | 1. What's Jailbreaking in General? | ||
+ | 2. what's JailBreaking of ChatGPT means? | ||
+ | 3. JailBreaking Prompt explanation | ||
+ | 4. Jailbreaking ChatGPT with DAN "Do Anything Now" | ||
+ | 5. Prompt Injection | ||
+ | 6. Does Jail Breaking work or is it hallucinations? | ||
+ | |} | ||
+ | |<!-- M --> | ||
+ | | valign="top" | | ||
+ | {| class="wikitable" style="width: 550px;" | ||
+ | || | ||
+ | <youtube>NS1M2DX_IUk</youtube> | ||
+ | <b>Update: ChatGPT (GPT-3) Hack. AI Text Security Breach Found! Why it's Serious | ||
+ | </b><br>We discuss a bug found in Artificial Intelligence (AI) language model, GPT-3. The weakness found is a common one found in other computer languages like SQL. This flaw was recently discovered & will probably be fixed in future releases. This hack also applies to the newly released ChatGPT. | ||
+ | |||
+ | Chapters: | ||
+ | * 00:00 Introduction | ||
+ | * 00:21 GPT-3 | ||
+ | * 00:37 Data Breach | ||
+ | * 01:34 Bug Discovery | ||
+ | * 05:27 Hacking Bot | ||
|} | |} | ||
|}<!-- B --> | |}<!-- B --> |
Revision as of 21:20, 18 February 2023
YouTube search... ...Google search
- Prompt Engineering
- Assistants ... Hybrid Assistants ... Agents ... Negotiation
- Similar conversation/search tools:
- Prompt injection attacks against GPT-3 | Simon Willison's Weblog
...a new vulnerability that is affecting some AI/ML models and, in particular, certain types of language models using prompt-based learning. ... create a malicious input that made a language model change its expected behaviour. - Exploring Prompt Injection Attacks | NCC Group
Prompt injection is a family of related computer security exploits carried out by getting machine learning models (such as large language model) which were trained to follow human-given instructions to follow instructions provided by a malicious user, which stands in contrast to the intended operation of instruction-following systems, wherein the ML model is intended only to follow trusted instructions (prompt) provided by the ML model's operator. Around 2023, prompt injection was seen "in the wild" in minor exploits against ChatGPT and similar chatbots, for example to reveal the hidden initial prompts of the systems, or to trick the chatbot into participating in conversations that violate the chatbot's content policy. Wikipedia
|
|
|
|