Transformer
Related topics:
- Sequence to Sequence (Seq2Seq)
- Recurrent Neural Networks (RNN) and Long Short-Term Memory (LSTM)
- Autoencoders / Encoder-Decoders
- Natural Language Processing (NLP), Natural Language Inference (NLI) and Recognizing Textual Entailment (RTE)
Attention models “attend” to specific parts of the input (an image or text) in sequence, one after another. By relying on a sequence of glances, they capture (visual) structure; this can be contrasted with other (machine vision) techniques that process the whole input, e.g. an entire image, in a single forward pass.
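As a concrete illustration, the sketch below implements scaled dot-product attention, the formulation used in the Transformer: each query produces a weight distribution over the input positions, and the output is the correspondingly weighted average of the values. This is a minimal NumPy sketch; the function names and toy dimensions are illustrative, not from the original page.

```python
import numpy as np

def softmax(x, axis=-1):
    # Numerically stable softmax over the given axis.
    e = np.exp(x - x.max(axis=axis, keepdims=True))
    return e / e.sum(axis=axis, keepdims=True)

def scaled_dot_product_attention(Q, K, V):
    # Q: (num_queries, d_k), K: (num_keys, d_k), V: (num_keys, d_v).
    # Each output row is a weighted average of the rows of V; the
    # weights show how strongly each query "attends" to each input position.
    d_k = Q.shape[-1]
    scores = Q @ K.T / np.sqrt(d_k)     # similarity of each query to each key
    weights = softmax(scores, axis=-1)  # attention distribution per query
    return weights @ V, weights

# Toy example (hypothetical sizes): 2 queries attending over 4 input positions.
rng = np.random.default_rng(0)
Q = rng.normal(size=(2, 8))
K = rng.normal(size=(4, 8))
V = rng.normal(size=(4, 8))
out, w = scaled_dot_product_attention(Q, K, V)
print(out.shape, w.shape)  # (2, 8) (2, 4); each row of w sums to 1
```

Note that this "soft" attention weighs all input positions at once per step, whereas the sequential-glance ("hard" attention) models described above sample one location at a time; both contrast with architectures that process the whole input in a single undifferentiated pass.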