Out-of-Distribution (OOD) Generalization
Revision as of 17:49, 27 May 2023
- Towards Out-Of-Distribution Generalization: A Survey | Z. Shen, J. Liu, Y. He, X. Zhang, R. Xu, H. Yu, P. Cui - arXiv - Cornell University
- Towards a Theoretical Framework of Out-of-Distribution Generalization | H. Ye, C. Xie, T. Cai, R. Li, Z. Li, L. Wang - arXiv - Cornell University
- Out-of-Distribution Generalization via Risk Extrapolation | D. Krueger, E. Caballero, J. Jacobsen, A. Zhang, Jonathan Binas, D. Zhang, R. Le Priol, A. Courville
- Using Interventions to Improve Out-of-Distribution Generalization of Text-Matching Recommendation Systems | P. Bansal, Y. Prabhu, E. Kiciman, A. Sharma
- How Reliable Are Out-of-Distribution Generalization Methods for Medical Image Segmentation? | A. Sanner, C. González & A. Mukhopadhyay
- Meta-Causal Feature Learning for Out-of-Distribution Generalization | Y. Wang, X. Li, Z. Qi, J. Li, X. Li, X. Meng & L. Meng
Out-of-Distribution (OOD) generalization refers to the ability of a machine learning model to generalize to new data drawn from a different distribution than the training data. This is a challenging problem because the test distribution is unknown at training time and differs from the training distribution. Several families of methods aim to improve out-of-distribution generalization. According to a survey on the topic, existing methods can be categorized into three branches according to their position in the learning pipeline: unsupervised representation learning, supervised model learning, and optimization. Another approach to out-of-distribution generalization is to learn domain-invariant features or hypothesis-invariant features.
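Why OOD generalization is hard can be seen in a minimal NumPy sketch (entirely synthetic data, not from any of the papers above): a feature that is spuriously correlated with the label during training loses that correlation at test time, and a least-squares model that leaned on it degrades sharply.

```python
import numpy as np

rng = np.random.default_rng(0)
n = 1000

# Training distribution: y depends causally on x1; x2 merely tracks y
# (a spurious correlation that will vanish at test time).
x1 = rng.normal(size=n)
y = x1 + 0.1 * rng.normal(size=n)
x2_train = y + 0.1 * rng.normal(size=n)
X_train = np.column_stack([x1, x2_train])

# Ordinary least squares happily puts weight on the spurious feature.
w, *_ = np.linalg.lstsq(X_train, y, rcond=None)

# OOD test distribution: x2 is now independent noise.
x1_test = rng.normal(size=n)
y_test = x1_test + 0.1 * rng.normal(size=n)
X_test = np.column_stack([x1_test, rng.normal(size=n)])

mse_id = np.mean((X_train @ w - y) ** 2)
mse_ood = np.mean((X_test @ w - y_test) ** 2)
print(f"in-distribution MSE:     {mse_id:.3f}")
print(f"out-of-distribution MSE: {mse_ood:.3f}")
```

The in-distribution error is tiny while the OOD error is orders of magnitude larger; domain-invariant-feature methods try to force the model to rely only on x1, whose relationship to y is stable across distributions.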
Difference Between In-Context Learning and OOD Generalization
In-Context Learning (ICL) refers to the ability of a machine learning model to learn a task from a few examples provided in its context (the prompt), without any fine-tuning. Learning from a handful of in-context examples is known as few-shot learning; when no examples are provided at all, it is zero-shot learning.
Out-of-distribution (OOD) generalization, on the other hand, refers to the ability of a machine learning model to generalize to new data that comes from a different distribution than the training data.
The main difference is one of focus: ICL concerns adapting to a task from examples supplied at inference time, with no parameter updates, while OOD generalization concerns performing well when the test data is distributed differently from the training data.
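The ICL side of this distinction can be made concrete with a short sketch (the task, review texts, and prompt format are all hypothetical): the "learning" consists only of placing labeled examples in the context window and appending the query.

```python
# Hypothetical few-shot prompt for sentiment classification.
# No weights are updated anywhere; the model is expected to infer
# the task from the examples embedded in the prompt.
examples = [
    ("The movie was wonderful", "positive"),
    ("I hated every minute", "negative"),
]
query = "An absolute delight from start to finish"

prompt = "\n".join(f"Review: {text}\nSentiment: {label}" for text, label in examples)
prompt += f"\nReview: {query}\nSentiment:"
print(prompt)
```

Sending such a prompt to a language model and reading its completion is few-shot ICL; evaluating that same model on reviews drawn from a domain unlike anything in its training data would instead be a question of OOD generalization.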