Transformer

*[[Autoencoders / Encoder-Decoders]]
*[[Natural Language Inference (NLI) and Recognizing Textual Entailment (RTE)]]

Attention models “attend” to specific parts of an image in sequence, one after another. By relying on a sequence of glances, they capture visual structure; this contrasts with other machine vision techniques that process a whole image in a single forward pass.
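To make the “sequence of glances” idea concrete, below is a minimal Python (NumPy) sketch of soft attention over image patches. Everything in it is an illustrative assumption rather than code from this page: the image is pre-split into patch feature vectors, and a small recurrent state decides where to look next, one glance at a time.

<syntaxhighlight lang="python">
import numpy as np

rng = np.random.default_rng(0)
num_patches, dim, num_glances = 16, 8, 3   # toy sizes, chosen arbitrarily

# Toy patch features standing in for an encoded image (num_patches x dim).
patches = rng.normal(size=(num_patches, dim))
# Recurrence weights that fold each glimpse into the glance state.
W = rng.normal(size=(dim, dim)) * 0.1

def softmax(x):
    e = np.exp(x - x.max())
    return e / e.sum()

state = np.zeros(dim)                      # what the model has "seen" so far
for t in range(num_glances):
    scores = patches @ state               # relevance of each patch to the state
    weights = softmax(scores)              # attention distribution over patches
    glimpse = weights @ patches            # weighted sum of patch features
    state = np.tanh(W @ glimpse + state)   # fold the glimpse in before the next glance
    print(f"glance {t}: most-attended patch = {weights.argmax()}")
</syntaxhighlight>

At the first glance the state is all zeros, so attention is uniform; each later glance is steered by what previous glimpses folded into the state. That step-by-step narrowing is what distinguishes this from processing the whole image in one forward pass.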
  
 
<youtube>W2rWgXJBZhU</youtube>
 