 
[http://www.youtube.com/results?search_query=supervised+Machine+Learning YouTube search...]
[http://www.google.com/search?q=supervised+deep+machine+learning+ML+artificial+intelligence ...Google search]
* [[Supervised|Supervised Learning]] ... [[Semi-Supervised]] ... [[Self-Supervised]] ... [[Unsupervised]]
* [[Learning Techniques]]
* [[Process Supervision]]
* [[Video/Image]] ... [[Vision]] ... [[Enhancement]] ... [[Fake]] ... [[Reconstruction]] ... [[Colorize]] ... [[Occlusions]] ... [[Predict image]] ... [[Image/Video Transfer Learning]]
* [[End-to-End Speech]] ... [[Synthesize Speech]] ... [[Speech Recognition]] ... [[Music]]
* [[What is Artificial Intelligence (AI)? | Artificial Intelligence (AI)]] ... [[Generative AI]] ... [[Machine Learning (ML)]] ... [[Deep Learning]] ... [[Neural Network]] ... [[Reinforcement Learning (RL)|Reinforcement]] ... [[Learning Techniques]]
* [[Average-Stochastic Gradient Descent (SGD) Weight-Dropped LSTM (AWD-LSTM)]]
* [[Agents]] ... [[Robotic Process Automation (RPA)|Robotic Process Automation]] ... [[Assistants]] ... [[Personal Companions]] ... [[Personal Productivity|Productivity]] ... [[Email]] ... [[Negotiation]] ... [[LangChain]]
* [[Large Language Model (LLM)]] ... [[Natural Language Processing (NLP)]] ... [[Natural Language Generation (NLG)|Generation]] ... [[Natural Language Classification (NLC)|Classification]] ... [[Natural Language Processing (NLP)#Natural Language Understanding (NLU)|Understanding]] ... [[Language Translation|Translation]] ... [[Natural Language Tools & Services|Tools & Services]]
* [[Sequence to Sequence (Seq2Seq)]]
* [[Recurrent Neural Network (RNN)]]
* [[Long Short-Term Memory (LSTM)]]
* [[Bidirectional Encoder Representations from Transformers (BERT)]] ... a better model, but with less investment than the larger [[OpenAI]] organization
* [[Conversational AI]] ... [[ChatGPT]] | [[OpenAI]] ... [[Bing/Copilot]] | [[Microsoft]] ... [[Gemini]] | [[Google]] ... [[Claude]] | [[Anthropic]] ... [[Perplexity]] ... [[You]] ... [[phind]] ... [[Ernie]] | [[Baidu]]:
** [[Attention]] Mechanism ... [[Transformer]] ... [[Generative Pre-trained Transformer (GPT)]] ... [[Generative Adversarial Network (GAN)|GAN]] ... [[Bidirectional Encoder Representations from Transformers (BERT)|BERT]]
** [[Reinforcement Learning (RL) from Human Feedback (RLHF)]]
** [[Supervised]] Learning
** [[Policy]] ... [[Policy vs Plan]] ... [[Constitutional AI]] ... [[Trust Region Policy Optimization (TRPO)]] ... [[Policy Gradient (PG)]] ... [[Proximal Policy Optimization (PPO)]]
* [http://www.unite.ai/supervised-vs-unsupervised-learning/ Supervised vs Unsupervised Learning | Daniel Nelson - Unite.ai]
* [http://en.wikipedia.org/wiki/Supervised_learning Supervised Learning | Wikipedia]
* [https://www.technologyreview.com/2023/02/08/1068068/chatgpt-is-everywhere-heres-where-it-came-from/ ChatGPT is everywhere. Here’s where it came from | Will Douglas Heaven - MIT Technology Review]
 
<b>Supervised Learning</b> is a type of [[Machine Learning (ML)]] where a computer is trained to make [[...predict values|prediction]]s based on labeled examples, learning a function that maps inputs to outputs. It is used in applications such as [[Vision|image]] and [[Speech Recognition]], and [[Natural Language Processing (NLP)]]. In supervised learning, you provide a training data set with answers, such as a set of pictures of animals along with the names of the animals. The goal of that training is a model that can correctly identify a picture (of a kind of animal that was included in the training set) that it has not previously seen. Training and evaluation turn supervised learning algorithms into models by optimizing their parameters to find the set of values that best matches the ground truth of your data. The algorithms often rely on variants of steepest descent for their [[optimizer]]s, for example stochastic gradient descent (SGD), which is essentially steepest descent performed multiple times from randomized starting points. Common refinements on SGD add factors that correct the direction of the gradient based on momentum or adjust the learning rate based on progress from one pass through the data (called an epoch) to the next. [http://www.infoworld.com/article/3394399/machine-learning-algorithms-explained.html Machine learning algorithms explained | Martin Heller - InfoWorld]
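The training loop described above can be sketched in a few lines of code. The snippet below is a minimal illustration only, not taken from the InfoWorld article: it fits a simple linear model to labeled examples with stochastic gradient descent plus a momentum term, and the toy data, learning rate, and epoch count are assumptions chosen for demonstration.

<pre>
# Minimal sketch of supervised learning with SGD + momentum (illustrative toy example).
import numpy as np

rng = np.random.default_rng(0)

# Labeled training set: inputs X with ground-truth answers y (here y = 3*x + 2 + noise).
X = rng.uniform(-1, 1, size=(200, 1))
y = 3.0 * X[:, 0] + 2.0 + rng.normal(0, 0.1, size=200)

w, b = 0.0, 0.0        # model parameters to optimize
vw, vb = 0.0, 0.0      # momentum terms
lr, momentum = 0.1, 0.9

for epoch in range(20):                   # one epoch = one pass through the data
    for i in rng.permutation(len(X)):     # stochastic: visit examples in random order
        pred = w * X[i, 0] + b
        err = pred - y[i]                 # compare the prediction with the labeled answer
        gw, gb = err * X[i, 0], err       # gradients of the squared error
        vw = momentum * vw - lr * gw      # momentum corrects the update direction
        vb = momentum * vb - lr * gb
        w += vw
        b += vb

print(f"learned w={w:.2f}, b={b:.2f}")    # should approach w=3, b=2
</pre>

Each pass over the shuffled training set is one epoch, and the momentum variables carry a running correction to the gradient direction, as described above.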
 
http://s3.amazonaws.com/static2.simplilearn.com/ice9/free_resources_article_thumb/Machine_Learning_2.jpg
  
<youtube>lDPP0pkBHjQ</youtube>

<youtube>1AVrWvRvfxs</youtube>

<youtube>5Gp5xLbiAr0</youtube>


This kind of learning is possible when the inputs and the outputs are clearly identified, and algorithms are trained using labeled examples. To understand this better, let’s consider the following example: a piece of equipment could have data points labeled F (failed) or R (runs). Machine Learning: What it is and Why it Matters | Priyadharshini @ simplilearn

There are two main types of supervised learning problems: classification, which involves predicting a class label, and regression, which involves predicting a numerical value.

* Classification: a supervised learning problem that involves predicting a class label.
* Regression: a supervised learning problem that involves predicting a numerical label.

Both classification and regression problems may have one or more input variables, and the input variables may be of any data type, such as numerical or categorical. 14 Different Types of Learning in Machine Learning | Jason Brownlee - Machine Learning Mastery
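As a concrete, purely illustrative sketch of the two problem types, the snippet below trains a classifier that predicts the F (failed) / R (runs) class label from the equipment example and a regressor that predicts a numerical value. The sensor readings, targets, and scikit-learn model choices are assumptions for demonstration, not part of the cited articles.

<pre>
# Illustrative sketch: classification predicts a class label, regression predicts a number.
import numpy as np
from sklearn.linear_model import LinearRegression, LogisticRegression

rng = np.random.default_rng(1)
temps = rng.uniform(20, 100, size=(100, 1))   # one input variable (a sensor reading)

# Classification: predict the class label F (failed) or R (runs) for each data point.
labels = np.where(temps[:, 0] + rng.normal(0, 5, size=100) > 70, "F", "R")
clf = LogisticRegression(max_iter=1000).fit(temps, labels)
print(clf.predict([[85.0]]))                  # -> a class label, e.g. ['F']

# Regression: predict a numerical value (e.g. remaining useful life in hours).
hours = 500.0 - 4.0 * temps[:, 0] + rng.normal(0, 10, size=100)
reg = LinearRegression().fit(temps, hours)
print(reg.predict([[85.0]]))                  # -> a number, roughly 160 here
</pre>

The same labeled inputs can feed either kind of model; only the type of label being predicted changes.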


Active Learning

YouTube search... ...Google search

Active learning makes training a supervised model an iterative process. The model trains on an initial subset of labeled data from a large dataset. Then, it tries to make predictions on the rest of the unlabeled data based on what it has learned. ML engineers evaluate how certain the model is in its predictions and, by using a variety of acquisition functions, can quantify the performance benefit added by annotating one of the unlabeled samples. By expressing uncertainty in its predictions, the model is deciding for itself what additional data will be most useful for its training. In doing so, it asks annotators to provide more examples of only that specific type of data so that it can train more intensively on that subset during its next round of training. Think of it like quizzing a student to figure out where their knowledge gap is. Once you know what problems they are missing, you can provide them with textbooks, presentations and other materials so that they can target their learning to better understand that particular aspect of the subject. With active learning, training a model moves from being a linear process to a circular one with a strong feedback loop. - Active learning is the future of generative AI: Here’s how to leverage it | Eric Landau - TechCrunch
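A rough sketch of that feedback loop is shown below; it is an assumed toy setup, not the author's implementation. A classifier is trained on a small labeled subset, a least-confidence acquisition function scores its uncertainty on the remaining unlabeled pool, and the most uncertain samples are sent to the "annotator" (here, the hidden ground-truth labels) before the next round of training.

<pre>
# Illustrative active-learning loop using least-confidence (uncertainty) sampling.
import numpy as np
from sklearn.datasets import make_classification
from sklearn.linear_model import LogisticRegression

X, y = make_classification(n_samples=1000, n_features=10, random_state=0)

rng = np.random.default_rng(0)
labeled = list(rng.choice(len(X), size=20, replace=False))   # small initial labeled subset
unlabeled = [i for i in range(len(X)) if i not in labeled]

for round_ in range(5):                       # training becomes an iterative, circular process
    model = LogisticRegression(max_iter=1000).fit(X[labeled], y[labeled])

    # Acquisition function: least confidence = 1 - highest predicted class probability.
    probs = model.predict_proba(X[unlabeled])
    uncertainty = 1.0 - probs.max(axis=1)

    # Ask the "annotator" (here, the hidden ground truth) to label the most uncertain samples.
    query = np.argsort(uncertainty)[-10:]
    newly_labeled = [unlabeled[i] for i in query]
    labeled.extend(newly_labeled)
    unlabeled = [i for i in unlabeled if i not in newly_labeled]

    print(f"round {round_}: labeled={len(labeled)}, accuracy={model.score(X, y):.3f}")
</pre>

Each round quantifies where the model is least certain and spends the annotation budget on exactly those samples, which is the feedback loop the paragraph above describes.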