Difference between revisions of "Prompt Injection Attack"

From
Jump to: navigation, search
m
m
Line 8: Line 8:
 
[https://www.google.com/search?q=Prompt+Injection+Attackchatbot+assistant+artificial+intelligence+deep+machine+learning ...Google search]
 
[https://www.google.com/search?q=Prompt+Injection+Attackchatbot+assistant+artificial+intelligence+deep+machine+learning ...Google search]
  
* [[Human-Machine Interaction (HMI) Engineering]]
+
* [[Prompt Engineering]]
 
* [[Assistants]] ... [[Hybrid Assistants]]  ... [[Agents]]  ... [[Negotiation]]  
 
* [[Assistants]] ... [[Hybrid Assistants]]  ... [[Agents]]  ... [[Negotiation]]  
 
* Similar conversation/search tools:
 
* Similar conversation/search tools:

Revision as of 08:17, 18 February 2023

YouTube search... ...Google search


...a new vulnerability that is affecting some AI/ML models and, in particular, certain types of language models using prompt-based learning. ... create a malicious input that made a language model change its expected behaviour. - Exploring Prompt Injection Attacks | NCC Group

What is GPT-3 Prompt Injection & Prompt Leaking? AI Adversarial Attacks
In this video, we take a deeper look at GPT-3 or any Large Language Model's Prompt Injection & Prompt Leaking. These are security exploitation in Prompt Engineering. These are also AI Adversarial Attacks. The name Prompt Injection comes from the age-old SQL Injection where a malicious SQL script can be added to a web form to manipulate the underlying SQL query. In a similar fashion, Prompts can be altered to get abnormal results from a LLM or GPT-3 based Application.

Perplexity - AI Chat based Conversational Search Engine
Perplexity AI is an answer engine that delivers accurate answers to complex questions using large language models. Try it - Perplexity.ai