Out-of-Distribution (OOD) Generalization
Revision as of 17:02, 27 May 2023
YouTube ... Quora ... Google search ... Google News ... Bing News
- In-Context Learning (ICL) ... Context ... Out-of-Distribution (OOD) Generalization
- Singularity ... Sentience ... AGI ... Curious Reasoning ... Emergence ... Moonshots ... Explainable AI ... Automated Learning
- Mathematical Reasoning
- Transfer Learning
- Towards Out-Of-Distribution Generalization: A Survey
- Towards a Theoretical Framework of Out-of-Distribution Generalization
- Out-of-Distribution Generalization via Risk Extrapolation
- Using Interventions to Improve Out-of-Distribution Generalization of ... (https://arxiv.org/abs/2210.10636)
- How Reliable Are Out-of-Distribution Generalization Methods for Medical ... (https://link.springer.com/chapter/10.1007/978-3-030-92659-5_39)
- Meta-Causal Feature Learning for Out-of-Distribution Generalization ...
Out-of-Distribution (OOD) generalization refers to the ability of a machine learning model to generalize to new data drawn from a distribution different from the training distribution. The problem is challenging because the test distribution is both unknown at training time and different from the training distribution. Several families of methods aim to improve OOD generalization. According to a survey of the topic, existing methods can be grouped into three categories by their position in the learning pipeline: unsupervised representation learning, supervised model learning, and optimization. Another approach is to learn domain-invariant or hypothesis-invariant features, so that the representation used for prediction does not depend on which environment the data came from.
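The core difficulty can be seen in a toy NumPy sketch (illustrative only; the class means, shift size, and training hyperparameters are assumptions, not taken from any cited method): a classifier trained by ordinary empirical risk minimization performs well on held-out data from the training distribution, but degrades sharply once the test inputs are shifted.

```python
import numpy as np

rng = np.random.default_rng(0)

def make_data(n, shift=0.0):
    # Two Gaussian classes separated along the first feature;
    # `shift` moves the test distribution away from training.
    y = rng.integers(0, 2, n)
    x = rng.normal(0.0, 1.0, (n, 2))
    x[:, 0] += np.where(y == 1, 2.0, -2.0) + shift
    return x, y

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

# Train logistic regression with plain gradient descent (ERM).
Xtr, ytr = make_data(1000)
w, b = np.zeros(2), 0.0
for _ in range(500):
    p = sigmoid(Xtr @ w + b)
    g = p - ytr                       # gradient of the log loss
    w -= 0.1 * (Xtr.T @ g) / len(ytr)
    b -= 0.1 * g.mean()

def accuracy(X, y):
    return float(((sigmoid(X @ w + b) > 0.5) == y).mean())

Xid, yid = make_data(1000)             # in-distribution test set
Xood, yood = make_data(1000, shift=3)  # covariate-shifted test set
id_acc, ood_acc = accuracy(Xid, yid), accuracy(Xood, yood)
```

Here `id_acc` stays close to the Bayes-optimal accuracy for the training distribution, while `ood_acc` drops substantially because the learned decision boundary no longer separates the shifted classes. Methods in the categories above (invariant representation learning, robust model learning, robust optimization) aim to close exactly this gap.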