Transformer-XL
- A Light Introduction to Transformer-XL | Elvis - Medium
- Natural Language Processing (NLP)
- Memory Networks
- Autoencoder (AE) / Encoder-Decoder
Transformer-XL combines the two leading architectures for language modeling:
- Long Short-Term Memory (LSTM), Gated Recurrent Unit (GRU), and other Recurrent Neural Network (RNN) models, which handle the input tokens (words or characters) one by one to learn the relationships between them
- Attention Mechanism/Model - the Transformer Model, which receives a whole segment of tokens and learns the dependencies between all of them at once using an attention mechanism (see the sketch after this list)
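The fusion of the two ideas is segment-level recurrence: hidden states computed for the previous segment are cached as a "memory" that the current segment's attention can read, so context flows across segments the way state flows through an RNN. Below is a minimal illustrative sketch in PyTorch, not the paper's implementation; the function and parameter names (attend_with_memory, w_q, w_k, w_v, mem_len) are made up for this example, and it omits Transformer-XL's relative positional encodings, multiple heads, and per-layer memories.

```python
# Minimal sketch of segment-level recurrence (illustrative, not official):
# hidden states from the previous segment are cached and attended over
# alongside the current segment, but receive no gradient.
import torch
import torch.nn.functional as F

def attend_with_memory(segment, memory, w_q, w_k, w_v):
    """segment: [seg_len, d_model] current segment hidden states.
    memory:  [mem_len, d_model] cached states from the previous segment.
    w_q, w_k, w_v: [d_model, d_model] projection matrices (assumed names)."""
    # Keys/values see memory + current segment; queries see only the segment.
    context = torch.cat([memory.detach(), segment], dim=0)
    q = segment @ w_q                        # [seg_len, d_model]
    k = context @ w_k                        # [mem_len + seg_len, d_model]
    v = context @ w_v
    scores = q @ k.T / (q.shape[-1] ** 0.5)  # scaled dot-product attention
    return F.softmax(scores, dim=-1) @ v     # [seg_len, d_model]

# Usage: process a long sequence segment by segment, carrying memory forward.
d_model, seg_len, mem_len = 64, 16, 16
w_q, w_k, w_v = (torch.randn(d_model, d_model) * 0.02 for _ in range(3))
memory = torch.zeros(mem_len, d_model)
for segment in torch.randn(4, seg_len, d_model):  # four toy segments
    out = attend_with_memory(segment, memory, w_q, w_k, w_v)
    memory = out.detach()[-mem_len:]              # cache for the next segment
```

Detaching the memory is the design point this sketch highlights: the current segment can attend far beyond its own length, yet backpropagation stops at the segment boundary, which keeps training cost bounded while extending the effective context.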