Transformer

 
*[[Recurrent Neural Networks (RNN) and Long Short-Term Memory (LSTM)]]
*[[Autoencoders / Encoder-Decoders]]
*[[Natural Language Processing (NLP), Natural Language Inference (NLI) and Recognizing Textual Entailment (RTE)]]
 

YouTube search...

These models “attend” to specific parts of the input (an image or text) in sequence, one glance after another. By relying on a sequence of glances they capture (visual) structure, which can be contrasted with other (machine vision) techniques that process the whole input, e.g. an image, in a single forward pass.
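To make the attention idea concrete, here is a minimal sketch (not from this article) of scaled dot-product attention, the core operation of the Transformer. It assumes NumPy; the function name and the toy self-attention example at the end are illustrative only.

<pre>
import numpy as np

def scaled_dot_product_attention(Q, K, V):
    """Sketch of scaled dot-product attention.
    Q, K: (seq_len, d_k); V: (seq_len, d_v)."""
    d_k = Q.shape[-1]
    # Each query scores every key: where should this position "look"?
    scores = Q @ K.T / np.sqrt(d_k)
    # Softmax turns the scores into a distribution over input positions.
    weights = np.exp(scores - scores.max(axis=-1, keepdims=True))
    weights /= weights.sum(axis=-1, keepdims=True)
    # The output is a weighted mixture of values: a learned "glance"
    # over the whole input rather than a single fixed pass.
    return weights @ V

# Toy self-attention: 3 input positions, 4-dimensional representations.
rng = np.random.default_rng(0)
x = rng.standard_normal((3, 4))
out = scaled_dot_product_attention(x, x, x)
print(out.shape)  # (3, 4)
</pre>

Note that the softmax weights let every position attend to every other position in parallel, which is how the Transformer replaces the step-by-step glances of a recurrent model.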