Embedding...

* projecting an input into another, more convenient representation space. For example, we can project (embed) faces into a space in which face matching can be more reliable. | [http://www.quora.com/profile/Chomba-Bupe Chomba Bupe]

* a mapping of a discrete (categorical) variable to a vector of continuous numbers. In the context of neural networks, embeddings are low-dimensional, learned continuous vector representations of discrete variables. Neural network embeddings are useful because they can reduce the dimensionality of categorical variables and meaningfully represent categories in the transformed space; a minimal sketch of such a learned lookup table follows this list. | [http://towardsdatascience.com/neural-network-embeddings-explained-4d028e6f0526 Neural Network Embeddings Explained | Will Koehrsen - Towards Data Science]

* a relatively low-dimensional space into which you can translate high-dimensional vectors. Embeddings make it easier to do machine learning on large inputs such as sparse vectors representing words. Ideally, an embedding captures some of the semantics of the input by placing semantically similar inputs close together in the embedding space. An embedding can be learned and reused across models. | [http://developers.google.com/machine-learning/crash-course/embeddings/video-lecture Embeddings | Machine Learning Crash Course]
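
To make the second definition concrete, here is a minimal sketch of a learned embedding lookup table using PyTorch's <code>nn.Embedding</code> layer. The vocabulary size, embedding dimension, and category indices are illustrative assumptions, not values from any particular model.

<syntaxhighlight lang="python">
import torch
import torch.nn as nn

# Hypothetical setup: 4 categories (e.g. product IDs), each mapped to a
# 3-dimensional dense vector. Both numbers are arbitrary for illustration.
num_categories = 4
embedding_dim = 3

embedding = nn.Embedding(num_embeddings=num_categories, embedding_dim=embedding_dim)

# A batch of integer category indices is looked up in the table,
# producing one dense, trainable vector per index.
category_ids = torch.tensor([0, 2, 3])
vectors = embedding(category_ids)   # shape: (3, 3)
print(vectors)

# The embedding weights are ordinary parameters: when this layer sits inside
# a larger network, backpropagation adjusts the vectors so that categories
# which behave similarly for the task end up close together in the space.
</syntaxhighlight>
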

Embeddings have 3 primary purposes:

# Finding nearest neighbors in the embedding space. These can be used to make recommendations based on user interests or to cluster categories (see the sketch after this list).
# As input to a machine learning model for a supervised task.
# For visualization of concepts and of the relations between categories.
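
As an illustration of the first purpose, the sketch below ranks items by cosine similarity in a tiny, hand-written embedding space. The item names and 2-D vectors are made up for the example; in practice the vectors would come from a trained embedding layer like the one sketched above.

<syntaxhighlight lang="python">
import numpy as np

# Hypothetical, hand-written 2-D embeddings for a few items (illustrative only).
embeddings = {
    "king":  np.array([0.90, 0.80]),
    "queen": np.array([0.85, 0.75]),
    "apple": np.array([-0.70, 0.30]),
    "pear":  np.array([-0.65, 0.35]),
}

def cosine_similarity(a, b):
    # Cosine of the angle between two vectors: 1.0 means same direction.
    return float(np.dot(a, b) / (np.linalg.norm(a) * np.linalg.norm(b)))

def nearest_neighbors(query, k=2):
    # Rank all other items by cosine similarity to the query item.
    q = embeddings[query]
    scores = [(name, cosine_similarity(q, vec))
              for name, vec in embeddings.items() if name != query]
    return sorted(scores, key=lambda s: s[1], reverse=True)[:k]

# "queen" should rank first, since its vector points in nearly the same
# direction as "king", while "apple" and "pear" point the opposite way.
print(nearest_neighbors("king"))
</syntaxhighlight>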