Sequence to Sequence (Seq2Seq)
http://3.bp.blogspot.com/-3Pbj_dvt0Vo/V-qe-Nl6P5I/AAAAAAAABQc/z0_6WtVWtvARtMk0i9_AtLeyyGyV6AI4wCLcB/s1600/nmt-model-fast.gif

[http://google.github.io/seq2seq/ Seq2seq | GitHub]

<youtube>CMank9YmtTM</youtube>
Revision as of 07:12, 30 April 2019
- Open Seq2Seq | NVIDIA
- Recurrent Neural Networks (RNN) and Long Short-Term Memory (LSTM)
- Autoencoder (AE) / Encoder-Decoder
- Attention Models
- Natural Language Processing (NLP)
- Assistants
- Attention Mechanism/Model - Transformer Model
- NLP Keras model in browser with TensorFlow.js
- seq2seq: the clown car of deep learning | Dev Nag - Medium
- Visualizing A Neural Machine Translation Model (Mechanics of Seq2seq Models With Attention) | Jay Alammar
Seq2seq is a general-purpose encoder-decoder architecture: an encoder reads the input sequence into a fixed-size state, and a decoder generates the output sequence from that state. It can be applied to machine translation, text summarization, conversational modeling, image captioning, and more.
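The encode-then-decode data flow can be sketched in plain NumPy. This is a minimal illustration with untrained random weights (all parameter names here are hypothetical, not from any particular library): the encoder folds the source tokens into one fixed-size state vector, and the decoder unrolls greedily from that state to emit output tokens.

```python
# Minimal seq2seq encoder-decoder sketch in plain NumPy.
# Weights are random and untrained -- this shows data flow and shapes only.
import numpy as np

rng = np.random.default_rng(0)
vocab, hidden, emb = 10, 16, 8

# Hypothetical parameters; a real model would learn these by backpropagation.
E = rng.normal(size=(vocab, emb))                   # embedding table
W_enc = rng.normal(size=(emb + hidden, hidden)) * 0.1
W_dec = rng.normal(size=(emb + hidden, hidden)) * 0.1
W_out = rng.normal(size=(hidden, vocab)) * 0.1     # state -> vocab logits

def rnn_step(x, h, W):
    # One vanilla RNN step: combine input embedding and previous state.
    return np.tanh(np.concatenate([x, h]) @ W)

def encode(tokens):
    h = np.zeros(hidden)
    for t in tokens:                                # read source left to right
        h = rnn_step(E[t], h, W_enc)
    return h                                        # fixed-size "thought vector"

def decode(h, start=0, max_len=5):
    out, t = [], start
    for _ in range(max_len):                        # greedy decoding loop
        h = rnn_step(E[t], h, W_dec)
        t = int(np.argmax(h @ W_out))               # pick most likely next token
        out.append(t)
    return out

src = [1, 2, 3, 4]
state = encode(src)                                 # shape (16,)
pred = decode(state)                                # 5 predicted token ids
```

The key property illustrated is that the source sequence of any length is compressed into one `hidden`-sized vector before decoding begins; attention mechanisms (linked above) were introduced precisely to relax this bottleneck.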