Transformer
 
* [[Autoencoders / Encoder-Decoders]]
* [[Natural Language Processing (NLP)]]
* [http://skymind.ai/wiki/attention-mechanism-memory-network A Beginner's Guide to Attention Mechanisms and Memory Networks | Skymind]
* [[Memory Networks]]
  
  
Attention mechanisms “attend” to specific parts of the input (an image or a piece of text) in sequence, one part after another. By relying on a sequence of glances they capture structure in the input; this can be contrasted with other (machine vision) techniques that process the whole input, e.g. an entire image, in a single forward pass.

Attention mechanisms in neural networks are fundamentally about memory access. That is the first thing to remember about attention: the name is something of a misnomer. [http://skymind.ai/wiki/attention-mechanism-memory-network A Beginner's Guide to Attention Mechanisms and Memory Networks | Skymind]
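
A minimal NumPy sketch of scaled dot-product attention, the variant used in the Transformer, read as a soft memory access: each query scores every key, and the softmaxed scores decide how much of each value (memory slot) to read. The shapes and toy inputs below are illustrative assumptions, not taken from the article.

<syntaxhighlight lang="python">
import numpy as np

def softmax(x, axis=-1):
    # Numerically stable softmax.
    e = np.exp(x - x.max(axis=axis, keepdims=True))
    return e / e.sum(axis=axis, keepdims=True)

def attention(Q, K, V):
    # Scaled dot-product attention. Shapes: Q (n_q, d), K (n_k, d), V (n_k, d_v).
    scores = Q @ K.T / np.sqrt(Q.shape[-1])  # similarity of each query to each key
    weights = softmax(scores, axis=-1)       # each row sums to 1: a soft "address" over memory
    return weights @ V, weights              # weighted read from the value memory

# Toy example (hypothetical sizes): 2 queries reading from a memory of 4 slots.
rng = np.random.default_rng(0)
Q = rng.normal(size=(2, 8))
K = rng.normal(size=(4, 8))
V = rng.normal(size=(4, 8))
out, w = attention(Q, K, V)
print(out.shape, w.shape)  # (2, 8) (2, 4)
</syntaxhighlight>

Because the softmax weights are differentiable, the network can learn where to "look" by gradient descent, unlike a hard, discrete memory lookup.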
  
  
 
http://skymind.ai/images/wiki/attention_mechanism.png

http://skymind.ai/images/wiki/attention_model.png
 
<youtube>W2rWgXJBZhU</youtube>
