Transformer

*[[Recurrent Neural Networks (RNN) and Long Short-Term Memory (LSTM)]]
*[[Autoencoders / Encoder-Decoders]]
*[[Natural Language Processing (NLP)]]
 
YouTube search... ...Google search

Transformers “attend” to specific parts of the input (an image or text) in sequence, one after another. By relying on a sequence of glances, they capture (visual) structure; this can be contrasted with other (machine vision) techniques that process a whole input, e.g. an entire image, in a single forward pass.
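The core operation behind this selective attention in the Transformer architecture is scaled dot-product attention: each position builds its output as a weighted mixture of all input positions, with the weights computed from query–key similarity. Below is a minimal NumPy sketch of that operation; the function name, toy shapes, and self-attention usage are illustrative assumptions, not definitions from this page.

<syntaxhighlight lang="python">
import numpy as np

def scaled_dot_product_attention(Q, K, V):
    """Core Transformer operation: mix values V using query-key similarity.

    Q: (n_queries, d_k), K: (n_keys, d_k), V: (n_keys, d_v).
    """
    d_k = Q.shape[-1]
    # Similarity of every query to every key, scaled by sqrt(d_k)
    # so the softmax does not saturate for large dimensions.
    scores = Q @ K.T / np.sqrt(d_k)
    # Row-wise softmax: each query gets a distribution over the keys,
    # i.e. how strongly it "attends" to each input position.
    weights = np.exp(scores - scores.max(axis=-1, keepdims=True))
    weights /= weights.sum(axis=-1, keepdims=True)
    # Output: attention-weighted mixture of the values.
    return weights @ V

# Toy self-attention example: 4 positions, 8-dimensional representations.
rng = np.random.default_rng(0)
x = rng.normal(size=(4, 8))
out = scaled_dot_product_attention(x, x, x)
print(out.shape)  # (4, 8)
</syntaxhighlight>

In a full Transformer this operation is run in parallel across several learned attention heads, surrounded by learned linear projections, masking, and feed-forward layers; the sketch above shows only the single-head core.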