Sequence to Sequence (Seq2Seq)
Revision as of 09:23, 26 August 2018
- Recurrent Neural Networks (RNN) and Long Short-Term Memory (LSTM)
- Autoencoders / Encoder-Decoders
- Attention Models
- Natural Language Processing (NLP), Natural Language Inference (NLI) and Recognizing Textual Entailment (RTE)
- (Speech to) Text to Process to Text (to Speech) - Chatbot, Virtual Assistance