Bidirectional Encoder Representations from Transformers (BERT)
Revision as of 06:49, 18 June 2019
YouTube search... Attention Mechanism/Model - Transformer Model ...Google search
- Google's BERT - built on ideas from ULMFiT, ELMo, and the OpenAI Transformer (GPT)
- Transformer-XL
- Natural Language Processing (NLP)
- Microsoft makes Google’s BERT NLP model better | Khari Johnson - VentureBeat
- Watch me Build a Finance Startup | Siraj Raval