Difference between revisions of "Bidirectional Encoder Representations from Transformers (BERT)"
}}
[http://www.youtube.com/results?search_query=BERT+Transformer+nlp+language Youtube search...]
[[Attention Mechanism/Model - Transformer Model]]
[http://www.google.com/search?q=BERT+Transformer+nlp+language ...Google search]
Revision as of 16:26, 17 May 2019
- Google's BERT - built on ideas from ULMFiT, ELMo, and OpenAI's GPT
- Transformer-XL
- Natural Language Processing (NLP)
- Microsoft makes Google’s BERT NLP model better | Khari Johnson - VentureBeat
- Siraj Raval's Watch me Build a Finance Startup