Manifold Hypothesis
- Principal Component Analysis (PCA)
- Backpropagation
- Gradient Descent Optimization & Challenges
- Objective vs. Cost vs. Loss vs. Error Function
- Manifold (Wikipedia)
The Manifold Hypothesis states that real-world high-dimensional data lie on low-dimensional manifolds embedded within the high-dimensional space.
The Manifold Hypothesis explains (heuristically) why machine learning techniques are able to find useful features and produce accurate predictions from datasets with a potentially large number of dimensions (variables). Because the data of interest actually live in a space of low dimension, a machine learning model only needs to learn a few key features of the dataset to make decisions. However, these key features may turn out to be complicated functions of the original variables, and many of the algorithms behind machine learning techniques focus on ways to determine these (embedding) functions. What is the Manifold Hypothesis? | DeepAI
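To make the heuristic concrete, here is a minimal sketch (assuming NumPy; the helix construction, dimensions, and noise level are illustrative choices, not from the article). It generates points on a one-dimensional curve, embeds them in a 50-dimensional ambient space with a random linear map, and then applies Principal Component Analysis (PCA) via the singular value decomposition: nearly all of the variance is captured by the first few components, even though each data point has 50 coordinates.

```python
# Illustrative sketch of the Manifold Hypothesis: a 1-D manifold
# (a helix) embedded in a 50-dimensional ambient space still looks
# low-dimensional to PCA. All parameters here are arbitrary examples.
import numpy as np

rng = np.random.default_rng(0)

# Intrinsic coordinate: a single parameter t traces out the manifold.
t = rng.uniform(0, 4 * np.pi, size=1000)

# A 1-D manifold (helix) living in 3-D.
manifold = np.stack([np.cos(t), np.sin(t), 0.1 * t], axis=1)

# Embed the 3-D curve into a 50-dimensional ambient space,
# with a little noise so the data only approximately lie on it.
embedding = rng.normal(size=(3, 50))
X = manifold @ embedding + 0.01 * rng.normal(size=(1000, 50))

# PCA via SVD of the centered data matrix.
Xc = X - X.mean(axis=0)
singular_values = np.linalg.svd(Xc, compute_uv=False)
explained = singular_values**2 / np.sum(singular_values**2)

# Almost all variance sits in the first 3 components: the dataset is
# effectively low-dimensional despite its 50-D representation.
print("variance explained by first 3 components:", explained[:3].sum())
```

In this toy example the embedding is linear, so PCA alone recovers the low-dimensional structure; for curved manifolds the "key features" are nonlinear functions of the original variables, which is where the embedding-learning algorithms mentioned above come in.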