Difference between revisions of "Transformer"

m (BPeat moved page Attention Model to Attention Mechanism/Model without leaving a redirect)
[http://www.youtube.com/results?search_query=attention+model+ai+deep+learning+model YouTube search...]

[http://www.google.com/search?q=attention+model+deep+machine+learning+ML ...Google search]
 
*[[Sequence to Sequence (Seq2Seq)]]

Revision as of 17:46, 12 December 2018


Attention mechanisms “attend” to specific parts of the input (an image or text) in sequence, one part after another. By relying on a sequence of glances, they capture (visual) structure; this can be contrasted with other (machine vision) techniques that process a whole input, e.g. an image, in a single forward pass.
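The attending described above is commonly realized as a weighted sum over the input positions. A minimal NumPy sketch, assuming the Transformer-style scaled dot-product formulation (function names and the toy matrices are illustrative, not from this page):

```python
import numpy as np

def softmax(x, axis=-1):
    # Numerically stable softmax over the given axis.
    e = np.exp(x - x.max(axis=axis, keepdims=True))
    return e / e.sum(axis=axis, keepdims=True)

def scaled_dot_product_attention(Q, K, V):
    # Scores measure how strongly each query "attends" to each key.
    d_k = Q.shape[-1]
    scores = Q @ K.T / np.sqrt(d_k)
    weights = softmax(scores, axis=-1)   # each row sums to 1
    # Output: attention-weighted mix of the values.
    return weights @ V, weights

# Toy example: 2 queries attending over 3 key/value positions.
Q = np.array([[1.0, 0.0], [0.0, 1.0]])
K = np.array([[1.0, 0.0], [0.0, 1.0], [1.0, 1.0]])
V = np.array([[10.0, 0.0], [0.0, 10.0], [5.0, 5.0]])
out, w = scaled_dot_product_attention(Q, K, V)
```

Each output row is a soft "glance": a convex combination of the value vectors, with more weight on the positions whose keys match that query.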