Difference between revisions of "Manifold Hypothesis"


Revision as of 09:10, 3 September 2020

Youtube search... ...Google search



The Manifold Hypothesis states that real-world high-dimensional data lie on low-dimensional manifolds embedded within the high-dimensional space.



The Manifold Hypothesis explains (heuristically) why machine learning techniques are able to find useful features and produce accurate predictions from datasets that have a potentially large number of dimensions (variables). The fact that the actual dataset of interest lives in a space of low dimension means that a given machine learning model only needs to learn to focus on a few key features of the dataset to make decisions. However, these key features may turn out to be complicated functions of the original variables. Many of the algorithms behind machine learning techniques focus on ways to determine these (embedding) functions. What is the Manifold Hypothesis? | DeepAI
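A minimal sketch of the idea above (the construction and variable names are illustrative, not from any of the cited sources): we can generate points that nominally live in 50 ambient dimensions but are actually a smooth function of a single intrinsic coordinate, so the data occupies a space of much lower dimension than the ambient one.

```python
import numpy as np

# Illustrative sketch: "high-dimensional" data that actually lies on a
# 1-dimensional manifold (a curve) embedded in 50-dimensional space.
rng = np.random.default_rng(0)
t = rng.uniform(0, 1, size=1000)          # single intrinsic coordinate
basis = rng.normal(size=(3, 50))          # random embedding directions

# Each 50-D point is a smooth function of the one parameter t.
X = np.column_stack([np.sin(2 * np.pi * t),
                     np.cos(2 * np.pi * t),
                     t]) @ basis

# Although X has 50 ambient dimensions, its numerical rank exposes the
# low effective dimensionality: the points span only a 3-D linear
# subspace (and the curve itself is 1-D within that subspace).
rank = np.linalg.matrix_rank(X)
print(rank)  # 3
```

This is the situation the hypothesis describes: a model trained on X need not learn 50 independent directions of variation, only the few that the embedding functions actually use.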

Manifold Learning and Dimensionality Reduction for Data Visualization... - Stefan Kühn
Dimensionality Reduction methods like PCA - Principal Component Analysis - are widely used in Machine Learning for a variety of tasks. But besides the well-known standard methods there are a lot more tools available, especially in the context of Manifold Learning. We will interactively explore these tools and present applications for Data Visualization and Feature Engineering using scikit-learn.
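The contrast the talk draws, between linear methods like PCA and Manifold Learning tools, can be sketched with scikit-learn's swiss-roll dataset (the particular dataset and parameter choices here are my assumptions, not taken from the talk):

```python
# Compare linear PCA with the manifold-learning method Isomap on a
# swiss roll: a 2-D surface rolled up inside 3-D space.
from sklearn.datasets import make_swiss_roll
from sklearn.decomposition import PCA
from sklearn.manifold import Isomap

X, color = make_swiss_roll(n_samples=800, random_state=0)

# PCA finds the best *linear* 2-D projection, which leaves the roll
# folded over on itself.
X_pca = PCA(n_components=2).fit_transform(X)

# Isomap instead preserves geodesic distances measured along the
# surface, effectively "unrolling" the 2-D manifold.
X_iso = Isomap(n_neighbors=10, n_components=2).fit_transform(X)

print(X_pca.shape, X_iso.shape)  # (800, 2) (800, 2)
```

Plotting both embeddings colored by the roll's intrinsic coordinate (the `color` array) makes the difference visible: Isomap separates points that PCA superimposes.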

My understanding of the Manifold Hypothesis | Machine learning
Kartik C
