Transformer-XL
YouTube search... ...Google search
- Bidirectional Encoder Representations from Transformers (BERT)
- A Light Introduction to Transformer-XL | Elvis - Medium
- Transformer-XL Explained: Combining Transformers and RNNs into a State-of-the-art Language Model | Rani Horev - Towards Data Science
- Transformer-XL: Language Modeling with Longer-Term Dependency | Z. Dai, Z. Yang, Y. Yang, W.W. Cohen, J. Carbonell, Q.V. Le, and R. Salakhutdinov
- Natural Language Processing (NLP)
- Memory Networks
- Autoencoder (AE) / Encoder-Decoder
- Attention Mechanism ...Transformer Model ...Generative Pre-trained Transformer (GPT)
Transformer-XL combines the two leading architectures for language modeling (see the sketch after this list):
- Recurrent Neural Network (RNN), which handles the input tokens (words or characters) one at a time to learn the relationships between them
- Attention Mechanism/Transformer Model, which receives a whole segment of tokens and learns the dependencies between them all at once using an attention mechanism
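The sketch below illustrates the core idea of combining the two: hidden states computed for the previous segment are cached and reused as extra context when attending over the current segment, so information can flow across segment boundaries much as it does through an RNN's recurrent state. This is a minimal sketch assuming PyTorch; the class name RecurrentAttentionLayer and parameters such as d_model and the segment shapes are illustrative, and it omits details of the actual Transformer-XL model such as relative positional encodings and per-layer memories.

<pre>
# Minimal sketch of Transformer-XL-style segment-level recurrence (illustrative, not the paper's code).
from typing import Optional
import torch
import torch.nn as nn

class RecurrentAttentionLayer(nn.Module):
    def __init__(self, d_model: int = 64, n_heads: int = 4):
        super().__init__()
        self.attn = nn.MultiheadAttention(d_model, n_heads, batch_first=True)

    def forward(self, segment: torch.Tensor, memory: Optional[torch.Tensor]) -> torch.Tensor:
        # Keys and values come from the cached previous segment plus the current one,
        # so each token can attend beyond the current segment boundary.
        if memory is None:
            context = segment
        else:
            # detach(): the cache is reused but gradients do not flow into old segments
            context = torch.cat([memory.detach(), segment], dim=1)
        out, _ = self.attn(segment, context, context)
        return out

# Usage: process a long sequence one segment at a time, carrying hidden states forward.
layer = RecurrentAttentionLayer()
memory = None
for segment in torch.randn(3, 1, 16, 64):   # 3 segments, batch 1, 16 tokens, d_model 64
    hidden = layer(segment, memory)
    memory = hidden                          # cache this segment's states for the next step
print(hidden.shape)                          # torch.Size([1, 16, 64])
</pre>

In this toy version the recurrence is between segments rather than between individual tokens: the attention mechanism still sees a whole segment at once, while the cached memory plays the role of the RNN's carried-over state.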