Local Linear Embedding (LLE)

[https://www.youtube.com/results?search_query=Local+Linear+Embedding YouTube search...]
[https://www.google.com/search?q=Local+Linear+Embedding+machine+learning+ML ...Google search]
  
 
* [[AI Solver]]
* [[Isomap]]
* [[Kernel Trick]]
* [https://cs.nyu.edu/~roweis/lle/ Locally Linear Embedding | S.T. Roweis & L. K. Saul - NYU]
* [https://en.wikipedia.org/wiki/Nonlinear_dimensionality_reduction Nonlinear dimensionality reduction | Wikipedia]
  
LLE begins by finding a set of the nearest neighbors of each point. It then computes a set of weights for each point that best describe the point as a linear combination of its neighbors. Finally, it uses an [https://en.wikipedia.org/wiki/Eigenvalues_and_eigenvectors eigenvector]-based optimization technique to find the low-dimensional embedding of the points, such that each point is still described by the same linear combination of its neighbors. LLE tends to handle non-uniform sample densities poorly because there is no fixed unit to prevent the weights from drifting as regions differ in sample density. LLE has no internal model. LLE was presented at approximately the same time as Isomap, and it has several advantages over Isomap, including faster optimization when implemented to take advantage of sparse matrix algorithms and better results on many problems.
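The three steps above — nearest neighbors, reconstruction weights, and an eigenvector-based embedding — can be sketched in plain NumPy. This is a minimal illustration, not a production implementation; the function name, the brute-force neighbor search, and the regularization constant are choices made here for clarity:

```python
import numpy as np

def lle(X, n_neighbors=10, n_components=2, reg=1e-3):
    """Minimal LLE sketch. X is an (n_samples, n_features) array."""
    n = X.shape[0]

    # Step 1: find each point's nearest neighbors (brute-force distances).
    d2 = ((X[:, None, :] - X[None, :, :]) ** 2).sum(-1)
    np.fill_diagonal(d2, np.inf)            # a point is not its own neighbor
    knn = np.argsort(d2, axis=1)[:, :n_neighbors]

    # Step 2: weights that best reconstruct each point as a linear
    # combination of its neighbors, constrained to sum to 1.
    W = np.zeros((n, n))
    for i in range(n):
        Z = X[knn[i]] - X[i]                # neighbors centered on point i
        C = Z @ Z.T                         # local Gram matrix
        C += reg * np.trace(C) * np.eye(n_neighbors)  # regularize for stability
        w = np.linalg.solve(C, np.ones(n_neighbors))
        W[i, knn[i]] = w / w.sum()          # enforce the sum-to-1 constraint

    # Step 3: eigenvector-based embedding — bottom eigenvectors of
    # M = (I - W)^T (I - W), skipping the constant (zero-eigenvalue) one,
    # so each point keeps the same linear relation to its neighbors.
    I = np.eye(n)
    M = (I - W).T @ (I - W)
    vals, vecs = np.linalg.eigh(M)          # eigenvalues in ascending order
    return vecs[:, 1:n_components + 1]
```

Note how the sparse structure of W (only `n_neighbors` nonzeros per row) is what makes sparse-matrix implementations of step 3 fast in practice.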
  
https://www.researchgate.net/publication/282773146/figure/fig2/AS:317438165045261@1452694564973/Steps-of-locally-linear-embedding-algorithm.png
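For a ready-made implementation, scikit-learn's `sklearn.manifold` module provides LLE (and Isomap, for comparison). A short usage sketch, assuming scikit-learn is installed; the parameter values below are illustrative, not tuned:

```python
import numpy as np
from sklearn.datasets import make_swiss_roll
from sklearn.manifold import LocallyLinearEmbedding

# 3-D points sampled from a rolled-up 2-D manifold.
X, _ = make_swiss_roll(n_samples=500, random_state=0)

# Embed into 2 dimensions using 12 neighbors per point.
lle = LocallyLinearEmbedding(n_neighbors=12, n_components=2, random_state=0)
X2 = lle.fit_transform(X)

print(X2.shape)                   # (500, 2)
print(lle.reconstruction_error_)  # residual of the neighbor-weight fit
```

Because LLE fixes no global scale, the embedding coordinates are only meaningful up to rotation and scaling; this is one face of the density-drift weakness noted above.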
  
  

Revision as of 22:05, 28 March 2023
