Word2Vec

* [http://pathmind.com/wiki/word2vec A Beginner's Guide to Word2Vec and Neural Word Embeddings | Chris Nicholson - A.I. Wiki pathmind]
* [http://towardsdatascience.com/introduction-to-word-embedding-and-word2vec-652d0c2060fa Introduction to Word Embedding and Word2Vec | Dhruvil Karani - Towards Data Science - Medium]
* [http://arxiv.org/pdf/1310.4546.pdf Distributed Representations of Words and Phrases and their Compositionality | Tomas Mikolov -] [[Google]]

Word2Vec is a shallow, two-layer neural network that is trained to reconstruct the linguistic contexts of words. It takes a large corpus of text as input and produces a vector space, typically of several hundred dimensions, with each unique word in the corpus assigned a corresponding vector in that space.
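The reference implementation is from Mikolov et al. (linked above); a widely used reimplementation is available in the gensim Python library. Below is a minimal sketch of training a model on a toy corpus, assuming gensim 4.x; the corpus and parameter values here are illustrative only.

<syntaxhighlight lang="python">
from gensim.models import Word2Vec

# Toy corpus: each sentence is a list of tokens (a real corpus would be far larger).
sentences = [
    ["the", "king", "rules", "the", "kingdom"],
    ["the", "queen", "rules", "the", "kingdom"],
    ["a", "cat", "chases", "a", "mouse"],
]

# Train a skip-gram model (sg=1). vector_size sets the embedding dimensionality,
# window the context size, and min_count the frequency cutoff for the vocabulary.
model = Word2Vec(sentences, vector_size=50, window=2, min_count=1, sg=1)

# Each word in the vocabulary now maps to a dense vector.
vec = model.wv["king"]          # numpy array of shape (50,)
print(vec.shape)

# Cosine similarity in the learned space reflects contextual similarity.
print(model.wv.most_similar("king", topn=2))
</syntaxhighlight>

Here <code>sg=1</code> selects the skip-gram architecture, which predicts context words from the target word; <code>sg=0</code> would instead use the continuous bag-of-words (CBOW) architecture, which predicts the target word from its context.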
