Chain of Thought (CoT)
YouTube ... Quora ... Google search ... Google News ... Bing News
- Artificial General Intelligence (AGI) to Singularity ... Curious Reasoning ... Emergence ... Moonshots ... Explainable AI ... Automated Learning
- In-Context Learning (ICL)
- Mathematical Reasoning
AI can generate text that follows a logical and coherent sequence of ideas, building on previous statements to form a chain of thought. Chain of thought (CoT) is a prompting method that breaks a problem down into a series of intermediate reasoning steps. It has significantly improved the ability of Large Language Models (LLMs) to perform complex reasoning and is among the current state-of-the-art techniques for guiding LLMs through multi-step tasks. A simple word problem illustrates the effect: without CoT prompting, GPT-3 (davinci-003) fails to solve it, but with CoT prompting the same model succeeds by working through the intermediate reasoning steps.
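To make this concrete, here is a minimal sketch of the difference between a standard prompt and a few-shot CoT prompt. The `call_llm` helper and the exact prompt wording are illustrative assumptions, not part of any particular API; wire the helper to whichever completion endpoint you use.

```python
# Minimal sketch: standard prompting vs. few-shot chain-of-thought prompting.
# `call_llm` is a hypothetical stand-in for any text-completion API.

def call_llm(prompt: str) -> str:
    """Hypothetical helper: send `prompt` to an LLM, return its completion."""
    raise NotImplementedError("wire this to your LLM provider")

question = (
    "Roger has 5 tennis balls. He buys 2 more cans of tennis balls. "
    "Each can has 3 tennis balls. How many tennis balls does he have now?"
)

# Standard prompting: ask for the answer directly.
standard_prompt = f"Q: {question}\nA:"

# CoT prompting: prepend a worked example whose answer spells out the
# intermediate reasoning steps, then ask the new question.
cot_prompt = (
    "Q: A cafeteria had 23 apples. They used 20 to make lunch and bought "
    "6 more. How many apples do they have?\n"
    "A: The cafeteria started with 23 apples. They used 20, leaving "
    "23 - 20 = 3. They bought 6 more, so 3 + 6 = 9. The answer is 9.\n\n"
    f"Q: {question}\nA:"
)

# The CoT prompt nudges the model to emit its own intermediate steps
# before the final answer, which is where the accuracy gain comes from.
```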
Multimodal Chain-of-Thought Reasoning
Tree of Thoughts (ToT)
- 2305.08291 Large Language Model Guided Tree-of-Thought | Jieyi Long - arXiv.org
- 2305.10601 Tree of Thoughts: Deliberate Problem Solving with Large Language Models | S. Yao, D. Yu, J. Zhao, I. Shafran, T. Griffiths, Y. Cao, K. Narasimhan - arXiv.org
- GPT-4's logic capabilities can be enhanced with a "Tree of Thoughts" | Maximilian Schreiner - The Decoder
"Tree of Thoughts" is a new framework for inferencing language models like GPT-4, inspired by prompt engineering methods like Chain of Thought. It is a novel approach aimed at improving the problem-solving capabilities of auto-regressive Large Language Model (LLM)s by allowing them to explore multiple reasoning paths over thoughts. To implement ToT as a software system, an LLM is augmented with additional modules including a prompter agent, a checker module, a memory module, and a ToT controller. These modules engage in a multi-round conversation with the LLM to solve a given problem. The memory module records the conversation and state history of the problem-solving process, which allows the system to backtrack to previous steps of the thought-process and explore other directions from there.
Chain of Thought (CoT) meets Instruction Fine-Tuning

- Zero-Shot Prompting
- Implementing Chain of Thought (CoT) to Fine-tune models
Chain of Thought (CoT) meets Instruction Fine-Tuning is a new approach to fine-tuning Large Language Models (LLMs) that combines the benefits of CoT prompting and instruction fine-tuning.
CoT prompting is a technique for enabling LLMs to perform multi-step reasoning by prompting them to generate a step-by-step explanation of their reasoning process. Instruction fine-tuning is a technique for fine-tuning LLMs to perform specific tasks by providing them with explicit instructions.
The CoT meets Instruction fine-tuning approach involves fine-tuning an LLM on a dataset of CoT demonstrations that have been generated using instruction fine-tuning. This approach has several advantages over traditional fine-tuning techniques:
- Improved performance on reasoning tasks. CoT prompting has been shown to improve LLM performance on reasoning tasks by a large margin. This is because CoT prompting helps the model to understand the reasoning process and to generate more accurate and contextually relevant outputs.
- Reduced data requirements. Fine-tuning a model with CoT demonstrations requires much less data than fine-tuning with task-specific examples. This is because the CoT demonstrations provide the model with the necessary information to solve the task, even if the model has never seen the task before.
- Improved generalization ability. CoT fine-tuned models have been shown to generalize better to new tasks and datasets than task-specific fine-tuned models. This is because CoT fine-tuned models learn to solve tasks in a more generalizable way, rather than learning to solve specific examples.
Overall, CoT meets Instruction Fine-Tuning is a promising approach to fine-tuning LLMs: it can improve performance on reasoning tasks, reduce data requirements, and improve generalization.
Here is an example of how the CoT meets Instruction fine-tuning approach could be used to fine-tune an LLM to solve the following math problem:
Problem: What is the average of 10, 20, and 30?
First, we would use instruction fine-tuning to generate a set of CoT demonstrations for the problem. For example, we could prompt the LLM with the following instruction:
Instruction: Explain step-by-step how to calculate the average of 10, 20, and 30.
The LLM would then generate a step-by-step explanation of how to solve the problem, similar to the following:
CoT demonstration:
- Start by adding all three numbers together: 10 + 20 + 30 = 60.
- Then, divide the sum by the number of numbers: 60 / 3 = 20.
- Therefore, the average of 10, 20, and 30 is 20.
Once we have generated a set of CoT demonstrations, we can fine-tune the LLM on them using any standard fine-tuning technique. For example, we could use Supervised Learning to train the LLM to predict the next step in a CoT demonstration sequence, as sketched below.
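As a minimal sketch of that supervised setup, the snippet below turns the CoT demonstration above into (input, target) pairs for next-step prediction. The record format is an assumption for illustration, not a standard.

```python
# Minimal sketch: convert a CoT demonstration into supervised
# (input, target) pairs, where the model learns to predict step k
# given the instruction and steps 1..k-1. The format is illustrative.

instruction = (
    "Explain step-by-step how to calculate the average of 10, 20, and 30."
)
cot_steps = [
    "Start by adding all three numbers together: 10 + 20 + 30 = 60.",
    "Then, divide the sum by the number of numbers: 60 / 3 = 20.",
    "Therefore, the average of 10, 20, and 30 is 20.",
]

def to_training_pairs(instruction: str, steps: list[str]) -> list[dict]:
    pairs = []
    for k, step in enumerate(steps):
        context = "\n".join([instruction, *steps[:k]])
        pairs.append({"input": context, "target": step})
    return pairs

for pair in to_training_pairs(instruction, cot_steps):
    print(repr(pair["input"]), "->", repr(pair["target"]))
```

Any standard fine-tuning stack can then train on these pairs; the point is that the targets are reasoning steps rather than bare answers.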
After fine-tuning, the LLM should be able to solve the average problem step-by-step, even if it has never seen the problem before. This is because the CoT fine-tuning has taught the LLM to solve the problem in a generalizable way.
The CoT meets Instruction fine-tuning approach is a powerful and versatile technique for fine-tuning LLMs to perform a wide range of tasks, including reasoning tasks. It is especially useful for tasks where it is difficult or expensive to obtain labeled training data, or for tasks where the LLM needs to be able to generalize to new tasks and datasets.