Sequence to Sequence (Seq2Seq)
Revision as of 18:44, 12 December 2018
YouTube search... ...Google search
- Open Seq2Seq | NVIDIA
- Recurrent Neural Networks (RNN) and Long Short-Term Memory (LSTM)
- Autoencoder (AE) / Encoder-Decoder
- Attention Models
- Natural Language Processing (NLP), Natural Language Inference (NLI) and Recognizing Textual Entailment (RTE)
- (Speech to) Text to Process to Text (to Speech) - Chatbot, Virtual Assistance
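The pages linked above cover the building blocks of a seq2seq model: an encoder (often an RNN/LSTM) reads the input sequence into a compact state, and a decoder generates the output sequence from that state, frequently with attention added on top. As a rough illustration of how those pieces fit together, below is a minimal encoder-decoder sketch in PyTorch; the class name, layer sizes, and toy data are illustrative assumptions and not content from this page.

 import torch
 import torch.nn as nn
 
 class Seq2Seq(nn.Module):
     """Minimal encoder-decoder sketch (illustrative, not a reference implementation)."""
     def __init__(self, src_vocab, tgt_vocab, emb=32, hidden=64):
         super().__init__()
         self.src_emb = nn.Embedding(src_vocab, emb)
         self.tgt_emb = nn.Embedding(tgt_vocab, emb)
         self.encoder = nn.GRU(emb, hidden, batch_first=True)  # reads the source sequence
         self.decoder = nn.GRU(emb, hidden, batch_first=True)  # generates the target sequence
         self.out = nn.Linear(hidden, tgt_vocab)               # projects to target vocabulary
 
     def forward(self, src_ids, tgt_ids):
         # Encoder: compress the source sequence into a fixed-size hidden state.
         _, state = self.encoder(self.src_emb(src_ids))
         # Decoder: condition on that state and the (teacher-forced) target prefix.
         dec_out, _ = self.decoder(self.tgt_emb(tgt_ids), state)
         return self.out(dec_out)  # logits over target tokens at each step
 
 # Toy usage: a batch of 2 source sequences (length 5) and target prefixes (length 4).
 model = Seq2Seq(src_vocab=100, tgt_vocab=100)
 src = torch.randint(0, 100, (2, 5))
 tgt = torch.randint(0, 100, (2, 4))
 logits = model(src, tgt)
 print(logits.shape)  # torch.Size([2, 4, 100])

In practice the fixed-size encoder state is the bottleneck that attention (see Attention Models above) was introduced to relieve, by letting the decoder look back at all encoder outputs rather than a single summary vector.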