Policy Gradient (PG)
- Policy vs Plan
- Trust Region Policy Optimization (TRPO)
- Proximal Policy Optimization (PPO)
- Reinforcement Learning (RL)
- Gradient Descent Optimization & Challenges
- Policy
- Assistants ... Hybrid Assistants ... Agents ... Negotiation ... HuggingGPT ... LangChain
- Generative AI ... OpenAI's ChatGPT ... Perplexity ... Microsoft's Bing ... You ... Google's Bard ... Baidu's Ernie