Continuous Bag-of-Words (CBoW)

YouTube search... ...Google search

* [[Bag-of-Words (BoW)]]
* [[Natural Language Processing (NLP)]]
* [[Large Language Model (LLM)]] ... [[Natural Language Processing (NLP)]] ... [[Natural Language Generation (NLG)|Generation]] ... [[Natural Language Classification (NLC)|Classification]] ... [[Natural Language Processing (NLP)#Natural Language Understanding (NLU)|Understanding]] ... [[Language Translation|Translation]] ... [[Natural Language Tools & Services|Tools & Services]]
* [[Word2Vec]]
* [[Skip-Gram]]
  
The CBOW model architecture tries to predict the current target word (the center word) from the source [[context]] words (the surrounding words). For a simple sentence such as “the quick brown fox jumps over the lazy dog”, the training data consists of (context_window, target_word) pairs; with a [[context]] window of size 2, this gives examples like ([quick, fox], brown), ([the, brown], quick), ([the, dog], lazy), and so on. The model thus learns to predict the target_word from the context_window words. [https://towardsdatascience.com/understanding-feature-engineering-part-4-deep-learning-methods-for-text-data-96c44370bbfa A hands-on intuitive approach to Deep Learning Methods for Text Data — Word2Vec, GloVe and FastText | Dipanjan Sarkar - Towards Data Science]
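To make the pairing concrete, here is a minimal sketch in plain Python (not from the cited article) that builds (context_window, target_word) pairs from the example sentence. The helper name cbow_pairs is hypothetical, and it assumes the window parameter counts words taken from each side of the target.

<syntaxhighlight lang="python">
# Minimal sketch: generating CBOW (context_window, target_word) training
# pairs from a tokenized sentence. cbow_pairs is a hypothetical helper
# name introduced here for illustration.

def cbow_pairs(tokens, window=2):
    """Yield ([context words], target_word) for each position in tokens.

    `window` counts words taken from EACH side of the target; some
    write-ups instead quote the total window size, which is why the
    pairs in the paragraph above correspond to one word per side.
    """
    for i, target in enumerate(tokens):
        left = tokens[max(0, i - window):i]    # words before the target
        right = tokens[i + 1:i + 1 + window]   # words after the target
        yield left + right, target

sentence = "the quick brown fox jumps over the lazy dog".split()
for context, target in cbow_pairs(sentence, window=1):
    print(context, "->", target)
# ['quick'] -> the
# ['the', 'brown'] -> quick
# ['quick', 'fox'] -> brown
# ...
</syntaxhighlight>

In practice CBOW is usually trained through an existing [[Word2Vec]] implementation rather than by hand; in gensim, for example, passing sg=0 to the Word2Vec model selects the CBOW variant, while sg=1 selects [[Skip-Gram]].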
  
 
https://miro.medium.com/max/542/1*d66FyqIMWtDCtOuJ_GcqAg.png
