Difference between revisions of "Prompt Injection Attack"
m |
m |
||
Line 20: | Line 20: | ||
...a new vulnerability that is affecting some AI/ML models and, in particular, certain types of language models using prompt-based learning. ... create a malicious input that made a language model change its expected behaviour. - [https://research.nccgroup.com/2022/12/05/exploring-prompt-injection-attacks/ Exploring Prompt Injection Attacks | NCC Group] | ...a new vulnerability that is affecting some AI/ML models and, in particular, certain types of language models using prompt-based learning. ... create a malicious input that made a language model change its expected behaviour. - [https://research.nccgroup.com/2022/12/05/exploring-prompt-injection-attacks/ Exploring Prompt Injection Attacks | NCC Group] | ||
+ | |||
+ | {|<!-- T --> | ||
+ | | valign="top" | | ||
+ | {| class="wikitable" style="width: 550px;" | ||
+ | || | ||
+ | <youtube>mB4m9rCxUSo</youtube> | ||
+ | <b>What is GPT-3 Prompt Injection & Prompt Leaking? AI Adversarial Attacks | ||
+ | </b><br>In this video, we take a deeper look at GPT-3 or any Large Language Model's Prompt Injection & Prompt Leaking. These are security exploitation in Prompt Engineering. These are also AI Adversarial Attacks. The name Prompt Injection comes from the age-old SQL Injection where a malicious SQL script can be added to a web form to manipulate the underlying SQL query. In a similar fashion, Prompts can be altered to get abnormal results from a LLM or GPT-3 based Application. | ||
+ | |} | ||
+ | |<!-- M --> | ||
+ | | valign="top" | | ||
+ | {| class="wikitable" style="width: 550px;" | ||
+ | || | ||
+ | <youtube>b0ai_LiRRvM</youtube> | ||
+ | <b>Perplexity - AI Chat based Conversational Search Engine | ||
+ | </b><br>Perplexity AI is an answer engine that delivers accurate answers to complex questions using large language models. | ||
+ | Try it - Perplexity.ai | ||
+ | |} | ||
+ | |}<!-- B --> |
Revision as of 08:14, 18 February 2023
YouTube search... ...Google search
- Human-Machine Interaction (HMI) Engineering
- Assistants ... Hybrid Assistants ... Agents ... Negotiation
- Similar conversation/search tools:
- Prompt injection attacks against GPT-3 | Simon Willison's Weblog
...a new vulnerability that is affecting some AI/ML models and, in particular, certain types of language models using prompt-based learning. ... create a malicious input that made a language model change its expected behaviour. - Exploring Prompt Injection Attacks | NCC Group
|
|