Difference between revisions of "Prompt Injection Attack"
m |
m |
||
Line 13: | Line 13: | ||
* [[Assistants]] ... [[Hybrid Assistants]] ... [[Agents]] ... [[Negotiation]] ... [[Hugging_Face#HuggingGPT|HuggingGPT]] ... [[LangChain]] | * [[Assistants]] ... [[Hybrid Assistants]] ... [[Agents]] ... [[Negotiation]] ... [[Hugging_Face#HuggingGPT|HuggingGPT]] ... [[LangChain]] | ||
* [[Attention]] Mechanism ...[[Transformer]] Model ...[[Generative Pre-trained Transformer (GPT)]] | * [[Attention]] Mechanism ...[[Transformer]] Model ...[[Generative Pre-trained Transformer (GPT)]] | ||
− | * [[Generative AI]] ... [[OpenAI]]'s [[ChatGPT]] ... [[Perplexity]] ... [[Microsoft]]'s [[Bing]] ... [[You]] ...[[Google]]'s [[Bard]] ... [[Baidu]]'s [[Ernie]] | + | * [[Generative AI]] ... [[Conversational AI]] ... [[OpenAI]]'s [[ChatGPT]] ... [[Perplexity]] ... [[Microsoft]]'s [[Bing]] ... [[You]] ...[[Google]]'s [[Bard]] ... [[Baidu]]'s [[Ernie]] |
* [[Cybersecurity]] | * [[Cybersecurity]] | ||
* [https://simonwillison.net/2022/Sep/12/prompt-injection/ Prompt injection attacks against GPT-3 | Simon Willison's Weblog] | * [https://simonwillison.net/2022/Sep/12/prompt-injection/ Prompt injection attacks against GPT-3 | Simon Willison's Weblog] |
Revision as of 12:23, 15 April 2023
YouTube search... ...Google search
- Prompt Engineering (PE)
- Natural Language Processing (NLP) ...Generation ...LLM ...Tools & Services
- Assistants ... Hybrid Assistants ... Agents ... Negotiation ... HuggingGPT ... LangChain
- Attention Mechanism ...Transformer Model ...Generative Pre-trained Transformer (GPT)
- Generative AI ... Conversational AI ... OpenAI's ChatGPT ... Perplexity ... Microsoft's Bing ... You ...Google's Bard ... Baidu's Ernie
- Cybersecurity
- Prompt injection attacks against GPT-3 | Simon Willison's Weblog
- Adversarial Prompting | Elvis Saravia - dair.ai
...a new vulnerability that is affecting some AI/ML models and, in particular, certain types of language models using prompt-based learning. ... create a malicious input that made a language model change its expected behaviour. - Exploring Prompt Injection Attacks | NCC Group
Prompt injection is a family of related computer security exploits carried out by getting machine learning models (such as large language model) which were trained to follow human-given instructions to follow instructions provided by a malicious user, which stands in contrast to the intended operation of instruction-following systems, wherein the ML model is intended only to follow trusted instructions (prompt) provided by the ML model's operator. Around 2023, prompt injection was seen "in the wild" in minor exploits against ChatGPT and similar Chatbots, for example to reveal the hidden initial prompts of the systems, or to trick the Chatbot into participating in conversations that violate the Chatbot's content policy. Wikipedia
|
|
|
|