Sequence to Sequence (Seq2Seq)
http://3.bp.blogspot.com/-3Pbj_dvt0Vo/V-qe-Nl6P5I/AAAAAAAABQc/z0_6WtVWtvARtMk0i9_AtLeyyGyV6AI4wCLcB/s1600/nmt-model-fast.gif
| + | |||
| + | [http://google.github.io/seq2seq/ Seq2seq | GitHub] | ||
| + | |||
<youtube>CMank9YmtTM</youtube>
Revision as of 23:03, 29 April 2019
YouTube search... ...Google search
- OpenSeq2Seq | NVIDIA
- Recurrent Neural Networks (RNN) and Long Short-Term Memory (LSTM)
- Autoencoder (AE) / Encoder-Decoder
- Attention Models
- Natural Language Processing (NLP)
- Assistants
- Attention Mechanism/Model - Transformer Model
- Visualizing A Neural Machine Translation Model (Mechanics of Seq2seq Models With Attention) | Jay Alammar
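The encoder-decoder idea behind the links above can be sketched in a few lines: an encoder reads the input sequence into a single hidden state, and a decoder emits output tokens from that state until an end-of-sequence marker. The sketch below is a toy, assuming hand-rolled vanilla-RNN steps with random (untrained) weights and a made-up five-token vocabulary; names like `rnn_step`, `W_emb`, and `W_out` are illustrative, not from any library.

```python
# Toy sequence-to-sequence (encoder-decoder) loop; illustrative only,
# with random untrained weights and a hypothetical five-token vocabulary.
import math
import random

random.seed(0)

VOCAB = ["<sos>", "<eos>", "a", "b", "c"]
HIDDEN = 8

def make_matrix(rows, cols):
    return [[random.uniform(-0.1, 0.1) for _ in range(cols)]
            for _ in range(rows)]

# Hypothetical parameters; a real model would learn these by training.
W_emb = make_matrix(len(VOCAB), HIDDEN)   # token embeddings
W_hh = make_matrix(HIDDEN, HIDDEN)        # recurrent weights
W_out = make_matrix(HIDDEN, len(VOCAB))   # output projection

def rnn_step(x_vec, h):
    # One vanilla RNN step: h' = tanh(x + W_hh h)
    return [math.tanh(x_vec[i] + sum(W_hh[i][j] * h[j] for j in range(HIDDEN)))
            for i in range(HIDDEN)]

def encode(tokens):
    # The encoder compresses the whole input sequence into one hidden state.
    h = [0.0] * HIDDEN
    for tok in tokens:
        h = rnn_step(W_emb[VOCAB.index(tok)], h)
    return h

def decode(h, max_len=10):
    # The decoder starts from the encoder state and greedily emits tokens
    # until it produces <eos> or reaches max_len.
    out, tok = [], "<sos>"
    for _ in range(max_len):
        h = rnn_step(W_emb[VOCAB.index(tok)], h)
        logits = [sum(h[j] * W_out[j][k] for j in range(HIDDEN))
                  for k in range(len(VOCAB))]
        tok = VOCAB[max(range(len(VOCAB)), key=lambda k: logits[k])]
        if tok == "<eos>":
            break
        out.append(tok)
    return out

print(decode(encode(["a", "b", "c"])))
```

With trained weights and LSTM cells in place of `rnn_step` this is the basic neural machine translation loop shown in the animation above; attention (see the Jay Alammar link) replaces the single fixed-size hidden state with a weighted view over all encoder states.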