Manifold Hypothesis






The Manifold Hypothesis states that real-world high-dimensional data (such as images or neural activity) lie on low-dimensional manifolds embedded within the high-dimensional space. Manifolds are topological spaces that locally look like Euclidean space.



The Manifold Hypothesis explains (heuristically) why machine learning techniques are able to find useful features and produce accurate predictions from datasets that have a potentially large number of dimensions (variables). Because the data set of interest actually lives in a space of low dimension, a given machine learning model only needs to learn to focus on a few key features of the dataset to make decisions. However, these key features may turn out to be complicated functions of the original variables. Many of the algorithms behind machine learning techniques focus on ways to determine these (embedding) functions. [http://deepai.org/machine-learning-glossary-and-terms/manifold-hypothesis#:~:text=The%20Manifold%20Hypothesis%20states%20that,within%20the%20high%2Ddimensional%20space What is the Manifold Hypothesis? | DeepAI]
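A minimal sketch of this idea in Python (the synthetic circle data, the random linear map, and all variable names are illustrative assumptions, not taken from the DeepAI article): points that appear to occupy 50 dimensions are generated from a 1-dimensional manifold, and a simple spectral check exposes the low intrinsic dimension.

<syntaxhighlight lang="python">
# Illustrative sketch (assumption: NumPy only, synthetic data): data that looks
# 50-dimensional actually lies on a 1-D manifold (a circle), and the singular
# values of the centred data matrix reveal the low intrinsic dimension.
import numpy as np

rng = np.random.default_rng(0)

# 1. Sample a 1-D manifold: angles on a circle.
t = rng.uniform(0.0, 2.0 * np.pi, size=1000)
circle = np.column_stack([np.cos(t), np.sin(t)])      # shape (1000, 2)

# 2. Embed it in a 50-dimensional ambient space with a random linear map.
embedding = rng.normal(size=(2, 50))
X = circle @ embedding                                # shape (1000, 50): "high-dimensional" data

# 3. Centre the data and inspect its singular values (a PCA-style check).
Xc = X - X.mean(axis=0)
singular_values = np.linalg.svd(Xc, compute_uv=False)

# Only the first two singular values are non-negligible: despite the 50 ambient
# dimensions, the data varies along at most 2 directions, because it was
# generated from a 1-D curve embedded by a rank-2 linear map.
print(np.round(singular_values[:5], 3))
</syntaxhighlight>

Real data sets are typically curved rather than linearly embedded, which is why nonlinear manifold learning methods are needed to recover the underlying (embedding) coordinates.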


Manifold Learning and Dimensionality Reduction for Data Visualization... - Stefan Kühn
Dimensionality Reduction methods such as PCA (Principal Component Analysis) are widely used in Machine Learning for a variety of tasks. But beyond the well-known standard methods, many more tools are available, especially in the context of Manifold Learning. We will interactively explore these tools and present applications for Data Visualization and Feature Engineering using scikit-learn.
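A short sketch of the kind of comparison described above (assuming scikit-learn and matplotlib are installed; the swiss-roll dataset and parameter values are example choices, not taken from the talk): a linear method (PCA) and a manifold learning method (Isomap) are each asked to flatten 3-D data that lies on a 2-D manifold.

<syntaxhighlight lang="python">
# Illustrative comparison of linear vs. manifold-based dimensionality reduction
# on the classic swiss roll: 3-D points lying on a 2-D manifold.
import matplotlib.pyplot as plt
from sklearn.datasets import make_swiss_roll
from sklearn.decomposition import PCA
from sklearn.manifold import Isomap

# `color` encodes position along the roll, so a good 2-D embedding should
# show a smooth colour gradient rather than overlapping folds.
X, color = make_swiss_roll(n_samples=1500, noise=0.05, random_state=0)

X_pca = PCA(n_components=2).fit_transform(X)                      # linear projection
X_iso = Isomap(n_neighbors=12, n_components=2).fit_transform(X)   # nonlinear unrolling

fig, axes = plt.subplots(1, 2, figsize=(10, 4))
axes[0].scatter(X_pca[:, 0], X_pca[:, 1], c=color, s=5)
axes[0].set_title("PCA (linear): the roll stays folded")
axes[1].scatter(X_iso[:, 0], X_iso[:, 1], c=color, s=5)
axes[1].set_title("Isomap (manifold learning): the roll is unrolled")
plt.tight_layout()
plt.show()
</syntaxhighlight>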

My understanding of the Manifold Hypothesis | Machine learning - Kartik C