Difference between revisions of "Constitutional AI"
Revision as of 14:26, 16 April 2023
- Reinforcement Learning (RL)
- Assistants ... Hybrid Assistants ... Agents ... Negotiation ... HuggingGPT ... LangChain
- Generative AI ... Conversational AI ... OpenAI's ChatGPT ... Perplexity ... Microsoft's Bing ... You ...Google's Bard ... Baidu's Ernie
- Reinforcement Learning (RL) from Human Feedback (RLHF)
- Claude | Anthropic
- Meet Claude: Anthropic’s Rival to ChatGPT | Riley Goodside - Scale
- Paper Review: Constitutional AI, Training LLMs using Principles
Constitutional AI is a method for training AI systems using a set of rules or principles that act as a "constitution" for the AI system. This approach allows the AI system to operate within a societally accepted framework and aligns it with human intentions.
Some benefits of using Constitutional AI include allowing a model to explain why it is refusing to provide an answer, improving transparency of AI decision making, and controlling AI behavior more precisely with fewer human labels.
The Constitutional AI methodology has two phases, similar to those we highlighted in our article on RLHF:
- The Supervised Learning Phase, in which the model critiques and revises its own responses according to the constitution's principles, and is then fine-tuned on the revised responses.
- The Reinforcement Learning Phase, in which preference labels are generated by the AI itself (guided by the constitution) rather than by human annotators, and a preference model trained on those labels steers further training.
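The critique-and-revise loop at the heart of the Supervised Learning Phase can be sketched as follows. This is a minimal illustration, not Anthropic's actual implementation: `query_model` is a hypothetical stand-in for a real LLM call, and the two principles are illustrative examples rather than the published constitution.

```python
# Sketch of the Constitutional AI supervised phase: critique a response
# against each principle, then revise it. Assumes a hypothetical
# `query_model` function standing in for a real LLM call.

CONSTITUTION = [
    "Choose the response that is least harmful.",
    "Choose the response that is most honest and transparent.",
]

def query_model(prompt: str) -> str:
    """Placeholder LLM call; returns canned strings for illustration."""
    if prompt.startswith("Critique"):
        return "The response could be more careful about potential harms."
    return "Revised response that better follows the principle."

def critique_and_revise(response: str, principles: list[str]) -> str:
    """One critique/revision pass per principle, as in the SL phase."""
    for principle in principles:
        critique = query_model(
            f"Critique this response against the principle "
            f"'{principle}':\n{response}"
        )
        response = query_model(
            f"Given the critique '{critique}', revise the response:\n"
            f"{response}"
        )
    # The revised responses become fine-tuning targets for the model.
    return response

revised = critique_and_revise("Initial model response.", CONSTITUTION)
```

In a real pipeline the revised responses collected this way form the supervised fine-tuning dataset, and the same constitution is reused in the RL phase to generate AI preference labels.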