Bidirectional Encoder Representations from Transformers (BERT)

 
** [[Recurrent Neural Network (RNN)]]
** [[Long Short-Term Memory (LSTM)]]
** [[Transformer]]
** [[Generative Pre-trained Transformer (GPT)]]
** [[Bidirectional Encoder Representations from Transformers (BERT)]] ... a better model, but less investment than the larger [[OpenAI]] organization
** [[ChatGPT]] | [[OpenAI]]:
*** [[Transformer]] / [[Attention]] Mechanism
*** [[Generative Pre-trained Transformer (GPT)]]
*** [[Reinforcement Learning (RL) from Human Feedback (RLHF)]]
*** [[Supervised]] Learning
*** [[Proximal Policy Optimization (PPO)]]
* [[Transformer-XL]]
* [http://venturebeat.com/2019/05/16/microsoft-makes-googles-bert-nlp-model-better/ Microsoft makes Google’s BERT NLP model better | Khari Johnson - VentureBeat]
* [[Watch me Build a Finance Startup]] | [[Creatives#Siraj Raval|Siraj Raval]]
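
A minimal usage sketch of BERT's bidirectional encoding, assuming the Hugging Face <code>transformers</code> and <code>torch</code> packages and the standard <code>bert-base-uncased</code> checkpoint (none of which are specified on this page):

<syntaxhighlight lang="python">
# Minimal sketch; assumes Hugging Face "transformers" and "torch" are installed
# and uses the standard "bert-base-uncased" checkpoint (not specified on this page).
import torch
from transformers import BertTokenizer, BertModel

tokenizer = BertTokenizer.from_pretrained("bert-base-uncased")
model = BertModel.from_pretrained("bert-base-uncased")

# Unlike a left-to-right model such as GPT, BERT's encoder attends to the whole
# sentence at once, so each token vector reflects both left and right context.
inputs = tokenizer("BERT encodes text bidirectionally.", return_tensors="pt")
with torch.no_grad():
    outputs = model(**inputs)

# One 768-dimensional contextual vector per input token (BERT-base).
print(outputs.last_hidden_state.shape)  # torch.Size([1, sequence_length, 768])
</syntaxhighlight>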


BERT Research | Chris McCormick