Local Linear Embedding (LLE)

 
|description=Helpful resources for your journey with artificial intelligence; videos, articles, techniques, courses, profiles, and tools
}}
[https://www.youtube.com/results?search_query=Local+Linear+Embedding YouTube search...]
[https://www.google.com/search?q=Local+Linear+Embedding+machine+learning+ML ...Google search]
  
* [[AI Solver]] ... [[Algorithms]] ... [[Algorithm Administration|Administration]] ... [[Model Search]] ... [[Discriminative vs. Generative]] ... [[Train, Validate, and Test]]
* [[Embedding]] ... [[Fine-tuning]] ... [[Retrieval-Augmented Generation (RAG)|RAG]] ... [[Agents#AI-Powered Search|Search]] ... [[Clustering]] ... [[Recommendation]] ... [[Anomaly Detection]] ... [[Classification]] ... [[Dimensional Reduction]] ... [[...find outliers]]
* [[Backpropagation]] ... [[Feed Forward Neural Network (FF or FFNN)|FFNN]] ... [[Forward-Forward]] ... [[Activation Functions]] ... [[Softmax]] ... [[Loss]] ... [[Boosting]] ... [[Gradient Descent Optimization & Challenges|Gradient Descent]] ... [[Algorithm Administration#Hyperparameter|Hyperparameter]] ... [[Manifold Hypothesis]] ... [[Principal Component Analysis (PCA)|PCA]]
** [[T-Distributed Stochastic Neighbor Embedding (t-SNE)]]
 
* [[Isomap]]
* [[Math for Intelligence]] ... [[Finding Paul Revere]] ... [[Social Network Analysis (SNA)]] ... [[Dot Product]] ... [[Kernel Trick]]
* [https://cs.nyu.edu/~roweis/lle/ Locally Linear Embedding | S.T. Roweis & L. K. Saul - NYU]
* [https://en.wikipedia.org/wiki/Nonlinear_dimensionality_reduction Nonlinear dimensionality reduction | Wikipedia]
  
Local Linear Embedding (LLE) begins by finding the set of nearest neighbors of each point. It then computes, for each point, the weights that best reconstruct that point as a linear combination of its neighbors. Finally, it uses an [https://en.wikipedia.org/wiki/Eigenvalues_and_eigenvectors eigenvector]-based optimization to find a low-dimensional embedding of the points in which each point is still described by the same linear combination of its neighbors. LLE tends to handle non-uniform sample densities poorly because there is no fixed unit to prevent the weights from drifting as sample density varies across regions. LLE has no internal model. LLE was presented at approximately the same time as Isomap and has several advantages over it: faster optimization when implemented to take advantage of sparse matrix algorithms, and better results on many problems.

Embedding...
* projecting an input into another, more convenient representation space. For example, we can project (embed) faces into a space in which face matching can be more reliable.
 
* a mapping of a discrete (categorical) variable to a vector of continuous numbers. In the context of neural networks, embeddings are low-dimensional, learned continuous vector representations of discrete variables. Neural network embeddings are useful because they can reduce the dimensionality of categorical variables and meaningfully represent categories in the transformed space. [https://towardsdatascience.com/neural-network-embeddings-explained-4d028e6f0526 Neural Network Embeddings Explained | Will Koehrsen - Towards Data Science]
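A minimal sketch of that mapping, using a hypothetical four-category vocabulary and a randomly initialized lookup table (in a real network the table entries would be learned by gradient descent):

```python
import numpy as np

# Hypothetical vocabulary of discrete categories.
categories = ["red", "green", "blue", "cyan"]
index = {c: i for i, c in enumerate(categories)}

# Low-dimensional embedding table: one 3-dimensional vector per category.
# Randomly initialized here; learned during training in practice.
rng = np.random.default_rng(0)
table = rng.normal(size=(len(categories), 3))

def embed(category: str) -> np.ndarray:
    """Map a discrete category to its continuous vector."""
    return table[index[category]]

vec = embed("blue")  # a 3-dimensional continuous representation
```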
 
  
Neural network embeddings have 3 primary purposes:
# Finding nearest neighbors in the embedding space. These can be used to make recommendations based on user interests or to cluster categories.
# As input to a machine learning model for a supervised task.
# For visualization of concepts and relations between categories.
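As an illustration of the first purpose, a toy nearest-neighbor lookup by cosine similarity over a handful of made-up item embeddings (real embeddings would come from a trained model):

```python
import numpy as np

def nearest(query, vectors, k=2):
    # Cosine similarity between the query embedding and every item embedding.
    q = query / np.linalg.norm(query)
    V = vectors / np.linalg.norm(vectors, axis=1, keepdims=True)
    sims = V @ q
    # Indices of the k most similar items, best match first.
    return np.argsort(-sims)[:k].tolist()

# Made-up 2-dimensional item embeddings for illustration only.
items = np.array([[1.0, 0.0],
                  [0.0, 1.0],
                  [0.9, 0.1]])
recs = nearest(np.array([1.0, 0.0]), items)  # items 0 and 2 are closest
```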
 
 
https://www.researchgate.net/publication/282773146/figure/fig2/AS:317438165045261@1452694564973/Steps-of-locally-linear-embedding-algorithm.png
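The three steps pictured above can be sketched in plain NumPy. This is a didactic implementation, not the optimized sparse-matrix solver mentioned earlier, and the regularization constant `reg` is an assumption added for numerical stability:

```python
import numpy as np

def lle(X, n_neighbors=10, n_components=2, reg=1e-3):
    """Locally Linear Embedding of the rows of X, sketched step by step."""
    n = X.shape[0]
    # Step 1: find the nearest neighbors of each point (excluding itself).
    d2 = ((X[:, None, :] - X[None, :, :]) ** 2).sum(-1)
    idx = np.argsort(d2, axis=1)[:, 1:n_neighbors + 1]
    # Step 2: weights that best reconstruct each point from its neighbors.
    W = np.zeros((n, n))
    for i in range(n):
        Z = X[idx[i]] - X[i]                 # neighbors centered on point i
        C = Z @ Z.T                          # local covariance
        C += reg * np.trace(C) * np.eye(n_neighbors)  # regularize (assumed)
        w = np.linalg.solve(C, np.ones(n_neighbors))
        W[i, idx[i]] = w / w.sum()           # weights constrained to sum to 1
    # Step 3: eigenvector problem on M = (I - W)^T (I - W); the bottom
    # non-constant eigenvectors give the low-dimensional coordinates.
    M = (np.eye(n) - W).T @ (np.eye(n) - W)
    vals, vecs = np.linalg.eigh(M)
    return vecs[:, 1:n_components + 1]       # skip the constant eigenvector

rng = np.random.default_rng(1)
X = rng.normal(size=(30, 3))
Y = lle(X, n_neighbors=8, n_components=2)    # 30 points mapped to 2-D
```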
 
  
  

Latest revision as of 22:59, 5 March 2024
