Bidirectional Encoder Representations from Transformers (BERT)

* [[Attention]] Mechanism/[[Transformer]] Model
** [[Generative Pre-trained Transformer (GPT)]]2/3
* [https://www.technologyreview.com/2023/02/08/1068068/chatgpt-is-everywhere-heres-where-it-came-from/ ChatGPT is everywhere. Here’s where it came from | Will Douglas Heaven - MIT Technology Review]
** [[Sequence to Sequence (Seq2Seq)]]
** [[Recurrent Neural Network (RNN)]]
** [[Long Short-Term Memory (LSTM)]]
** [[Transformer]]
** [[Generative Pre-trained Transformer (GPT)]]
** [[Bidirectional Encoder Representations from Transformers (BERT)]] ... a better model, but with less investment behind it than the larger [[OpenAI]] organization (a minimal usage sketch follows this list)
** [[ChatGPT]] | [[OpenAI]]
** [[Transformer-XL]]
* [http://venturebeat.com/2019/05/16/microsoft-makes-googles-bert-nlp-model-better/ Microsoft makes Google’s BERT NLP model better | Khari Johnson - VentureBeat]
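The encoder-only design is what separates BERT from the generative GPT family listed above: rather than predicting the next token left to right, BERT reads the whole sequence in both directions and produces a contextual vector for every token. Below is a minimal sketch of extracting those contextual embeddings from a pre-trained checkpoint, assuming the Hugging Face Transformers library and the publicly released bert-base-uncased model (both external to this page; names are illustrative, not a prescribed setup).

<pre>
# Minimal sketch: contextual token embeddings from a pre-trained BERT encoder.
# Assumes: pip install torch transformers, and the public bert-base-uncased checkpoint.
import torch
from transformers import BertTokenizer, BertModel

tokenizer = BertTokenizer.from_pretrained("bert-base-uncased")
model = BertModel.from_pretrained("bert-base-uncased")
model.eval()  # inference only, no fine-tuning here

sentence = "BERT encodes text bidirectionally, unlike left-to-right GPT decoders."
inputs = tokenizer(sentence, return_tensors="pt")  # adds [CLS]/[SEP] and an attention mask

with torch.no_grad():
    outputs = model(**inputs)

# One contextual vector per WordPiece token; hidden size is 768 for bert-base.
token_embeddings = outputs.last_hidden_state  # shape: (1, num_tokens, 768)
print(token_embeddings.shape)
</pre>

The same output object also exposes a pooled [CLS] vector (outputs.pooler_output), which is the usual starting point when fine-tuning BERT for sentence-level classification rather than token-level tasks.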







BERT Research | Chris McCormick