Local Linear Embedding (LLE)
{{#seo:
|title=PRIMO.ai
|titlemode=append
|keywords=artificial, intelligence, machine, learning, models, algorithms, data, singularity, moonshot, Tensorflow, Google, Nvidia, Microsoft, Azure, Amazon, AWS
|description=Helpful resources for your journey with artificial intelligence; videos, articles, techniques, courses, profiles, and tools
}}
[https://www.youtube.com/results?search_query=Local+Linear+Embedding YouTube search...]
[https://www.google.com/search?q=Local+Linear+Embedding+machine+learning+ML ...Google search]
* [[AI Solver]] ... [[Algorithms]] ... [[Algorithm Administration|Administration]] ... [[Model Search]] ... [[Discriminative vs. Generative]] ... [[Train, Validate, and Test]]
* [[Embedding]] ... [[Fine-tuning]] ... [[Retrieval-Augmented Generation (RAG)|RAG]] ... [[Agents#AI-Powered Search|Search]] ... [[Clustering]] ... [[Recommendation]] ... [[Anomaly Detection]] ... [[Classification]] ... [[Dimensional Reduction]] ... [[...find outliers]]
* [[Backpropagation]] ... [[Feed Forward Neural Network (FF or FFNN)|FFNN]] ... [[Forward-Forward]] ... [[Activation Functions]] ... [[Softmax]] ... [[Loss]] ... [[Boosting]] ... [[Gradient Descent Optimization & Challenges|Gradient Descent]] ... [[Algorithm Administration#Hyperparameter|Hyperparameter]] ... [[Manifold Hypothesis]] ... [[Principal Component Analysis (PCA)|PCA]]
* [[Dimensional Reduction Algorithms]]
** [[T-Distributed Stochastic Neighbor Embedding (t-SNE)]]
** [[Isomap]]
* [[Math for Intelligence]] ... [[Finding Paul Revere]] ... [[Social Network Analysis (SNA)]] ... [[Dot Product]] ... [[Kernel Trick]]
* [https://cs.nyu.edu/~roweis/lle/ Locally Linear Embedding | S.T. Roweis & L. K. Saul - NYU]
* [https://en.wikipedia.org/wiki/Nonlinear_dimensionality_reduction Nonlinear dimensionality reduction | Wikipedia]

Local Linear Embedding (LLE) begins by finding a set of the nearest neighbors of each point. It then computes a set of weights for each point that best describes the point as a linear combination of its neighbors. Finally, it uses an [https://en.wikipedia.org/wiki/Eigenvalues_and_eigenvectors eigenvector]-based optimization technique to find the low-dimensional embedding of the points, such that each point is still described by the same linear combination of its neighbors. LLE tends to handle non-uniform sample densities poorly because there is no fixed unit to prevent the weights from drifting as regions differ in sample density. LLE has no internal model. LLE was presented at approximately the same time as [[Isomap]]. It has several advantages over Isomap, including faster optimization when implemented to take advantage of sparse matrix algorithms, and better results on many problems.
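The procedure described above is available off the shelf as scikit-learn's <code>LocallyLinearEmbedding</code>. A minimal sketch on the classic swiss-roll dataset, unrolling a 2-D manifold curled through 3-D space (parameter values here are illustrative, not tuned):

```python
# Minimal LLE sketch with scikit-learn (illustrative parameters).
from sklearn.datasets import make_swiss_roll
from sklearn.manifold import LocallyLinearEmbedding

# 3-D swiss roll: points lie on a 2-D manifold curled through 3-D space.
X, color = make_swiss_roll(n_samples=1000, random_state=0)

# n_neighbors sets the size of the local patches; n_components is the
# target dimensionality of the embedding.
lle = LocallyLinearEmbedding(n_neighbors=12, n_components=2, random_state=0)
X_2d = lle.fit_transform(X)
print(X_2d.shape)  # (1000, 2)
```

Increasing <code>n_neighbors</code> trades local detail for robustness; too small a neighborhood can disconnect the manifold, too large a one mixes points from different folds of the roll.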
  
https://www.researchgate.net/publication/282773146/figure/fig2/AS:317438165045261@1452694564973/Steps-of-locally-linear-embedding-algorithm.png
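The three steps pictured above (find neighbors, solve for reconstruction weights, solve the eigenvector problem) can be sketched directly in NumPy. This is a bare-bones dense illustration for small datasets, not the sparse-matrix implementation mentioned above; the function name and regularization constant are this sketch's own choices:

```python
import numpy as np

def lle(X, n_neighbors=10, n_components=2, reg=1e-3):
    """Bare-bones Locally Linear Embedding (dense, illustrative only)."""
    n = X.shape[0]
    # Step 1: k nearest neighbors of each point (excluding the point itself).
    d2 = ((X[:, None, :] - X[None, :, :]) ** 2).sum(-1)
    np.fill_diagonal(d2, np.inf)
    nbrs = np.argsort(d2, axis=1)[:, :n_neighbors]
    # Step 2: weights that best reconstruct each point from its neighbors,
    # constrained to sum to 1 (solved via the local Gram matrix).
    W = np.zeros((n, n))
    for i in range(n):
        Z = X[nbrs[i]] - X[i]                          # centered neighbors
        C = Z @ Z.T
        C += reg * np.trace(C) * np.eye(n_neighbors)   # regularize for stability
        w = np.linalg.solve(C, np.ones(n_neighbors))
        W[i, nbrs[i]] = w / w.sum()
    # Step 3: embedding = eigenvectors of M = (I - W)^T (I - W) with the
    # smallest eigenvalues, skipping the constant (all-ones) eigenvector.
    I = np.eye(n)
    M = (I - W).T @ (I - W)
    vals, vecs = np.linalg.eigh(M)                     # ascending eigenvalues
    return vecs[:, 1:n_components + 1]

Y = lle(np.random.default_rng(0).normal(size=(60, 5)))
print(Y.shape)  # (60, 2)
```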
  
<youtube>Y1TBFuj-8iw</youtube>
<youtube>RPjPLlGefzw</youtube>
 
<youtube>scMntW3s-Wk</youtube>
 
<youtube>yBwpo-L80Mc</youtube>
<youtube>mTyT-oHoivA</youtube>
 
Latest revision as of 22:59, 5 March 2024