[http://www.youtube.com/results?search_query=Embedding+machine+learning YouTube search...]
[http://www.google.com/search?q=Embedding+machine+learning ...Google search]
- AI Solver
- ...find outliers
- Dimensional Reduction
- Local Linear Embedding (LLE)
- T-Distributed Stochastic Neighbor Embedding (t-SNE)
Embedding...
- projecting an input into another, more convenient representation space. For example, we can project (embed) faces into a space in which face matching is more reliable. | Chomba Bupe
- a mapping of a discrete (categorical) variable to a vector of continuous numbers. In the context of neural networks, embeddings are low-dimensional, learned continuous vector representations of discrete variables. Neural network embeddings are useful because they can reduce the dimensionality of categorical variables and meaningfully represent categories in the transformed space. Neural Network Embeddings Explained | Will Koehrsen - Towards Data Science
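A minimal sketch of the second definition, assuming PyTorch (the library choice and all names here are illustrative, not from the article): a learned embedding layer maps integer category IDs to dense, low-dimensional vectors whose values are trained along with the rest of the network.

```python
import torch
import torch.nn as nn

num_categories = 10_000  # e.g., distinct items in a catalog (hypothetical)
embedding_dim = 50       # far smaller than a 10,000-wide one-hot vector

# A lookup table of learnable weights: one 50-dim vector per category.
embedding = nn.Embedding(num_categories, embedding_dim)

# Integer category IDs in, continuous vectors out.
ids = torch.tensor([3, 17, 42])
vectors = embedding(ids)
print(vectors.shape)  # torch.Size([3, 50])

# The weights are ordinary parameters, so gradient descent shapes the
# space during training; similar categories end up with nearby vectors.
```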
Embeddings have three primary purposes:
- Finding nearest neighbors in the embedding space. These neighbors can be used to make recommendations based on user interests or to cluster categories (see the sketch after this list).
- As input to a machine learning model for a supervised task.
- For visualization of concepts and relations between categories.
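To make the first purpose concrete, here is a small sketch (illustrative only; the data and function names are hypothetical) of a nearest-neighbor lookup in an embedding space using cosine similarity:

```python
import numpy as np

rng = np.random.default_rng(0)
item_vectors = rng.normal(size=(1000, 50))  # 1,000 items, 50-dim embeddings

def nearest_neighbors(query, vectors, k=5):
    # Cosine similarity is the dot product of L2-normalized vectors.
    q = query / np.linalg.norm(query)
    v = vectors / np.linalg.norm(vectors, axis=1, keepdims=True)
    sims = v @ q
    # Skip rank 0: the most similar item is the query itself.
    return np.argsort(-sims)[1:k + 1]

query = item_vectors[0]
print(nearest_neighbors(query, item_vectors))  # indices of the 5 closest items
```

The same ranking drives recommendations ("users who liked this also liked..."), and projecting the vectors down to 2-D with t-SNE or another dimensionality-reduction method serves the visualization purpose.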