Sequence to Sequence (Seq2Seq)
- Open Seq2Seq | NVIDIA
- Recurrent Neural Networks (RNN) and Long Short-Term Memory (LSTM)
- Autoencoder (AE) / Encoder-Decoder
- Attention Models
- Natural Language Processing (NLP)
- Assistants
- Attention Mechanism/Model - Transformer Model
- Visualizing A Neural Machine Translation Model (Mechanics of Seq2seq Models With Attention) | Jay Alammar: http://jalammar.github.io/visualizing-neural-machine-translation-mechanics-of-seq2seq-models-with-attention/
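
The links above all build on the same core idea: an encoder network compresses an input sequence into a hidden state, and a decoder network generates an output sequence (possibly of a different length) from that state. Below is a minimal sketch of that encoder-decoder pattern in PyTorch; the GRU layers, the layer and vocabulary sizes, and the Seq2Seq class name are illustrative assumptions for this page, not taken from any of the linked resources.

# A minimal sketch of a sequence-to-sequence (encoder-decoder) model in PyTorch.
# All sizes, names, and the random toy data are illustrative assumptions.
import torch
import torch.nn as nn

class Seq2Seq(nn.Module):
    def __init__(self, src_vocab, tgt_vocab, emb_dim=32, hid_dim=64):
        super().__init__()
        self.src_emb = nn.Embedding(src_vocab, emb_dim)
        self.tgt_emb = nn.Embedding(tgt_vocab, emb_dim)
        # Encoder reads the source sequence into a fixed-size hidden state.
        self.encoder = nn.GRU(emb_dim, hid_dim, batch_first=True)
        # Decoder generates the target sequence conditioned on that state.
        self.decoder = nn.GRU(emb_dim, hid_dim, batch_first=True)
        self.out = nn.Linear(hid_dim, tgt_vocab)

    def forward(self, src, tgt_in):
        # src:    (batch, src_len)  source token ids
        # tgt_in: (batch, tgt_len)  target ids shifted right (teacher forcing)
        _, h = self.encoder(self.src_emb(src))       # h: (1, batch, hid_dim)
        dec_out, _ = self.decoder(self.tgt_emb(tgt_in), h)
        return self.out(dec_out)                     # (batch, tgt_len, tgt_vocab)

# Tiny smoke test on random data (hypothetical vocabulary sizes).
model = Seq2Seq(src_vocab=100, tgt_vocab=120)
src = torch.randint(0, 100, (4, 7))     # batch of 4 source sequences, length 7
tgt_in = torch.randint(0, 120, (4, 5))  # shifted target sequences, length 5
logits = model(src, tgt_in)
print(logits.shape)                     # torch.Size([4, 5, 120])

Training such a model would apply a cross-entropy loss between these logits and the target tokens. The fixed-size hidden state h is the bottleneck that motivates the attention mechanism covered in the links above: attention lets the decoder look back at all encoder states instead of a single summary vector.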