Chain of Thought (CoT)
Chain of Thought (CoT) prompting has an AI model generate text that follows a logical and coherent sequence of ideas, building on previous statements to form a chain of thought: the model writes out intermediate reasoning steps before committing to a final answer, which generally improves its performance on multi-step problems.
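A minimal prompting sketch of the idea is shown below. This is an illustration only, not code from this wiki: call_llm is a hypothetical placeholder for whatever completion API is in use, and the prompt wording is just one common way to ask for step-by-step reasoning.

```python
# Minimal sketch of Chain of Thought (CoT) prompting, assuming a generic
# text-completion model. `call_llm` is a hypothetical placeholder for a
# real client; here it only echoes a stub string so the script runs.

def call_llm(prompt: str) -> str:
    # Placeholder: substitute a real LLM API call here.
    return f"[model completion for a prompt of {len(prompt)} characters]"

def cot_prompt(question: str) -> str:
    """Build a prompt that asks the model to reason step by step."""
    return (
        "Answer the question below. Think step by step, writing out each "
        "intermediate reasoning step, then give the final answer on a line "
        "starting with 'Answer:'.\n\n"
        f"Question: {question}\nReasoning:"
    )

if __name__ == "__main__":
    question = ("A cafeteria had 23 apples. It used 20 to make lunch and "
                "bought 6 more. How many apples does it have now?")
    print(cot_prompt(question))
    print(call_llm(cot_prompt(question)))
```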
Tree of Thoughts (ToT)
- GPT-4's logic capabilities can be enhanced with a "Tree of Thoughts" | the-decoder.com
- Large Language Model Guided Tree-of-Thought | arXiv:2305.08291
- Tree of Thoughts: Deliberate Problem Solving with Large Language Models | arXiv:2305.10601
"Tree of Thoughts" is a new framework for inferencing language models like GPT-4, inspired by prompt engineering methods like Chain of Thought. It is a novel approach aimed at improving the problem-solving capabilities of auto-regressive Large Language Model (LLM)s by allowing them to explore multiple reasoning paths over thoughts. To implement ToT as a software system, an LLM is augmented with additional modules including a prompter agent, a checker module, a memory module, and a ToT controller. These modules engage in a multi-round conversation with the LLM to solve a given problem. The memory module records the conversation and state history of the problem-solving process, which allows the system to backtrack to previous steps of the thought-process and explore other directions from there.