Sequence to Sequence (Seq2Seq)


Seq2seq is a general-purpose encoder-decoder architecture that can be used for machine translation, text summarization, conversational modeling, image captioning, translating between dialects of software code, and more.

[Animated figure: neural machine translation encoder-decoder model]

Seq2seq | GitHub

We essentially have two different recurrent neural networks tied together here — the encoder RNN (bottom left boxes) listens to the input tokens until it gets a special <DONE> token, and then the decoder RNN (top right boxes) takes over and starts generating tokens, also finishing with its own <DONE> token. The encoder RNN evolves its internal state (depicted by light blue changing to dark blue while the English sentence tokens come in), and then once the <DONE> token arrives, we take the final encoder state (the dark blue box) and pass it, unchanged and repeatedly, into the decoder RNN along with every single generated German token. The decoder RNN also has its own dynamic internal state, going from light red to dark red. Voila! Variable-length input, variable-length output, from a fixed-size architecture.

seq2seq: the clown car of deep learning | Dev Nag - Medium
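As a rough illustration of the scheme described above, here is a minimal PyTorch-style sketch: an encoder RNN summarizes the source sentence into a final hidden state, and a decoder RNN receives that fixed state at every step alongside the previously generated token while evolving its own hidden state. All names, layer sizes, and the <DONE> token handling are illustrative assumptions, not the exact setup of any particular seq2seq implementation.

import torch
import torch.nn as nn

class Encoder(nn.Module):
    def __init__(self, vocab_size, emb_dim=64, hidden_dim=128):
        super().__init__()
        self.embed = nn.Embedding(vocab_size, emb_dim)
        self.rnn = nn.GRU(emb_dim, hidden_dim, batch_first=True)

    def forward(self, src):                       # src: (batch, src_len) token ids
        _, h = self.rnn(self.embed(src))          # h: (1, batch, hidden_dim)
        return h                                  # the final ("dark blue") encoder state

class Decoder(nn.Module):
    def __init__(self, vocab_size, emb_dim=64, hidden_dim=128):
        super().__init__()
        self.embed = nn.Embedding(vocab_size, emb_dim)
        # each step sees its own token embedding plus the unchanged encoder state
        self.rnn = nn.GRU(emb_dim + hidden_dim, hidden_dim, batch_first=True)
        self.out = nn.Linear(hidden_dim, vocab_size)

    def step(self, token, enc_state, h):
        # token: (batch,) previous output token; enc_state: (1, batch, hidden_dim)
        x = torch.cat([self.embed(token).unsqueeze(1),
                       enc_state.transpose(0, 1)], dim=-1)
        out, h = self.rnn(x, h)                   # decoder keeps its own dynamic state h
        return self.out(out.squeeze(1)), h

def greedy_translate(encoder, decoder, src, done_id, max_len=30):
    enc_state = encoder(src)                      # fixed-size summary of the input
    h = enc_state.clone()                         # decoder state initialized from it
    token = torch.full((src.size(0),), done_id, dtype=torch.long)  # assume <DONE> kicks off decoding
    outputs = []
    for _ in range(max_len):
        logits, h = decoder.step(token, enc_state, h)
        token = logits.argmax(dim=-1)             # greedy choice of next output token
        outputs.append(token)
        if (token == done_id).all():              # decoder finishes with its own <DONE>
            break
    return torch.stack(outputs, dim=1)            # (batch, generated_len)

In practice the decoder is trained with teacher forcing (feeding the reference target tokens rather than its own predictions), and attention mechanisms usually replace the single fixed encoder state, but the sketch keeps to the plain fixed-state formulation described in the quoted passage.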

