Out-of-Distribution (OOD) Generalization

* [[In-Context Learning (ICL)]] ... [[Context]] ... [[Out-of-Distribution (OOD) Generalization]]
 
* [[Singularity]] ... [[Artificial Consciousness / Sentience|Sentience]] ... [[Artificial General Intelligence (AGI)| AGI]] ... [[Inside Out - Curious Optimistic Reasoning| Curious Reasoning]] ... [[Emergence]] ... [[Moonshots]] ... [[Explainable / Interpretable AI|Explainable AI]] ...  [[Algorithm Administration#Automated Learning|Automated Learning]]
 
* [[Math for Intelligence#Mathematical Reasoning|Mathematical Reasoning]]
 
* [[Transfer Learning]]
 
* [https://arxiv.org/abs/2108.13624 Towards Out-Of-Distribution Generalization: A Survey]
 
* [https://arxiv.org/abs/2106.04496 Towards a Theoretical Framework of Out-of-Distribution Generalization]
 
Out-of-Distribution (OOD) generalization refers to the ability of a machine learning model to generalize to new data drawn from a different distribution than the training data. This is challenging because the test distribution is unknown at training time and may differ from the training distribution in unpredictable ways. Several families of methods aim to improve OOD generalization. According to a survey on the topic, existing methods can be categorized by their position in the learning pipeline: unsupervised representation learning, supervised model learning, and optimization. Another approach is to learn domain-invariant or hypothesis-invariant features, so that the representation the model relies on stays stable across environments.
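To make the distribution-shift problem concrete, here is a minimal illustrative sketch (not from the article above; the toy environment, the `make_env` helper, and all constants are invented for illustration). A least-squares model is trained where a spurious feature is strongly correlated with the label; at test time that correlation is gone, and the error grows:

```python
import numpy as np

rng = np.random.default_rng(0)

def make_env(n, spurious_corr, rng):
    """Toy environment: the true signal is x, but a spurious feature s
    is correlated with the label only as strongly as `spurious_corr`."""
    x = rng.normal(size=n)
    y = x + 0.1 * rng.normal(size=n)  # the label truly depends only on x
    s = spurious_corr * y + np.sqrt(1.0 - spurious_corr**2) * rng.normal(size=n)
    return np.column_stack([x, s]), y

# Training distribution: the spurious feature is almost a label leak.
X_tr, y_tr = make_env(10_000, spurious_corr=0.99, rng=rng)
# OOD test distribution: the spurious correlation is broken entirely.
X_te, y_te = make_env(10_000, spurious_corr=0.0, rng=rng)

# Ordinary least squares happily puts weight on the spurious feature.
w, *_ = np.linalg.lstsq(X_tr, y_tr, rcond=None)

def mse(X, y, w):
    return float(np.mean((X @ w - y) ** 2))

in_dist_err = mse(X_tr, y_tr, w)
ood_err = mse(X_te, y_te, w)
print(f"in-distribution MSE:     {in_dist_err:.4f}")
print(f"out-of-distribution MSE: {ood_err:.4f}")
# The OOD error is much larger: the weight learned for s is useless
# once the training-time correlation disappears at test time.
```

Invariant-feature methods (such as learning domain-invariant representations across multiple training environments) aim to avoid exactly this failure by discouraging the model from relying on features whose relationship to the label varies between environments.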
 
Source: Conversation with Bing, 5/27/2023
<youtube>Ugxj_6_Nzug</youtube>
(1) Teaching Algorithmic Reasoning via In-context Learning - arXiv.org. https://arxiv.org/pdf/2211.09066.pdf

<youtube>CxUmPZMg858</youtube>
(2) Algorithmic prompting or how to teach math to a large language model. https://the-decoder.com/how-to-teach-math-to-a-large-language-model/

<youtube>RL6OEC5Mcj0</youtube>
(3) 7 Examples of Algorithms in Everyday Life for Students. https://www.learning.com/blog/7-examples-of-algorithms-in-everyday-life-for-students/

<youtube>0hqDZ1JfuEA</youtube>
(4) How to write the Algorithm step by step? - Programming-point. http://programming-point.com/algorithm-step-by-step/
 

Revision as of 17:02, 27 May 2023

YouTube ... Quora ... Google search ... Google News ... Bing News
