Transformer

 
[http://www.google.com/search?q=attention+model+deep+machine+learning+ML ...Google search]
 
* [[Sequence to Sequence (Seq2Seq)]]
* [[Long Short-Term Memory (LSTM), Gated Recurrent Unit (GRU), and Recurrent Neural Network (RNN)]]
* [[Natural Language Processing (NLP)]]
* [[Memory Networks]]
* [[Transformer-XL]]
* Tensor2Tensor (T2T) | Google Brain
** [http://nlp.stanford.edu/seminar/details/lkaiser.pdf Tensor2Tensor Transformers: New Deep Models for NLP | Łukasz Kaiser]
** [http://github.com/tensorflow/tensor2tensor/blob/master/docs/walkthrough.md Tensor2Tensor | GitHub]
** [http://github.com/tensorflow/tensor2tensor Tensor2Tensor Library | GitHub]
 
* [http://jalammar.github.io/illustrated-transformer/ The Illustrated Transformer | Jay Alammar]
 
 
Attention mechanisms in neural networks are about memory access. That’s the first thing to remember about attention: it’s something of a misnomer. [http://skymind.ai/wiki/attention-mechanism-memory-network A Beginner's Guide to Attention Mechanisms and Memory Networks | Skymind]
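
That memory view can be made concrete in a few lines: a query is scored against a set of keys, the scores are softmax-normalized, and the read-out is the weighted sum of the stored values. A minimal NumPy sketch (the names and shapes below are illustrative, not taken from any of the sources above):

 import numpy as np

 def attention_read(query, keys, values):
     """Soft memory access: weight every stored value by how well
     its key matches the query, then return the blended read-out."""
     scores = keys @ query                     # (n_slots,) similarity per memory slot
     weights = np.exp(scores - scores.max())   # numerically stable softmax
     weights /= weights.sum()
     return weights @ values                   # weighted sum over the value vectors

 # Illustrative toy memory: 4 slots, 3-d keys, 2-d values
 rng = np.random.default_rng(0)
 keys, values = rng.standard_normal((4, 3)), rng.standard_normal((4, 2))
 query = rng.standard_normal(3)
 print(attention_read(query, keys, values))   # a 2-d vector read from "memory"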
 

3 ways of Attention (a code sketch follows the list):

  1. Autoencoder (AE) / Encoder-Decoder Attention
  2. Encoder Self-Attention
  3. Masked Decoder Self-Attention
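
The three cases differ only in where the queries, keys, and values come from and in whether a causal mask is applied; a sketch under the same illustrative assumptions as above:

 import numpy as np

 def attend(Q, K, V, mask=None):
     """Scaled dot-product attention with an optional boolean mask."""
     scores = Q @ K.T / np.sqrt(Q.shape[-1])    # (n_queries, n_keys)
     if mask is not None:
         scores = np.where(mask, scores, -1e9)  # masked positions get ~zero weight
     w = np.exp(scores - scores.max(axis=-1, keepdims=True))
     w /= w.sum(axis=-1, keepdims=True)
     return w @ V

 enc = np.random.randn(5, 8)  # encoder states: source length 5, model width 8
 dec = np.random.randn(3, 8)  # decoder states: target length 3, model width 8

 # 1. Encoder-decoder attention: decoder queries attend over encoder keys/values
 out1 = attend(dec, enc, enc)
 # 2. Encoder self-attention: queries, keys, and values all come from the encoder
 out2 = attend(enc, enc, enc)
 # 3. Masked decoder self-attention: a causal mask keeps position i
 #    from seeing positions after i
 out3 = attend(dec, dec, dec, mask=np.tril(np.ones((3, 3), dtype=bool)))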

Transformer Model - a simple network architecture based solely on attention mechanisms, dispensing with recurrence and convolutions entirely. Proposed in Attention Is All You Need | A. Vaswani, N. Shazeer, N. Parmar, J. Uszkoreit, L. Jones, A.N. Gomez, L. Kaiser, and I. Polosukhin; the abstract is quoted in full below.
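
One reason the architecture is more parallelizable than a recurrent model: every position in the sequence is processed at once by several attention heads running side by side, with no step-by-step recurrence. A rough multi-head self-attention sketch (dimensions and random weights are illustrative, not the paper's hyperparameters):

 import numpy as np

 def softmax(x):
     e = np.exp(x - x.max(axis=-1, keepdims=True))
     return e / e.sum(axis=-1, keepdims=True)

 def multi_head_self_attention(X, Wq, Wk, Wv, Wo, n_heads):
     """Project X to queries/keys/values, attend within each head's
     subspace, concatenate the heads, and mix with an output matrix."""
     n, d_model = X.shape
     d_head = d_model // n_heads
     Q, K, V = X @ Wq, X @ Wk, X @ Wv
     heads = []
     for h in range(n_heads):
         s = slice(h * d_head, (h + 1) * d_head)
         scores = Q[:, s] @ K[:, s].T / np.sqrt(d_head)
         heads.append(softmax(scores) @ V[:, s])   # (n, d_head) per head
     return np.concatenate(heads, axis=-1) @ Wo    # back to (n, d_model)

 n, d_model, n_heads = 6, 16, 4
 X = np.random.randn(n, d_model)
 Wq, Wk, Wv, Wo = (np.random.randn(d_model, d_model) * 0.1 for _ in range(4))
 print(multi_head_self_attention(X, Wq, Wk, Wv, Wo, n_heads).shape)  # (6, 16)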


[[File:attention_mechanism.png]]


Attention Is All You Need

The dominant sequence transduction models are based on complex recurrent (Long Short-Term Memory (LSTM), Gated Recurrent Unit (GRU), and Recurrent Neural Network (RNN)) or convolutional ((Deep) Convolutional Neural Network (DCNN/CNN)) networks in an encoder-decoder (Autoencoder (AE) / Encoder-Decoder) configuration. The best performing models also connect the encoder and decoder through an attention mechanism. We propose a new simple network architecture, the Transformer, based solely on attention mechanisms, dispensing with recurrence and convolutions entirely. Experiments on two machine translation tasks show these models to be superior in quality while being more parallelizable and requiring significantly less time to train. Attention Is All You Need | A. Vaswani, N. Shazeer, N. Parmar, J. Uszkoreit, L. Jones, A.N. Gomez, L. Kaiser, and I. Polosukhin - Google
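
The paper's core operations, in its own notation: scaled dot-product attention and its multi-head extension.

 \mathrm{Attention}(Q, K, V) = \mathrm{softmax}\!\left(\frac{QK^{\top}}{\sqrt{d_k}}\right) V

 \mathrm{MultiHead}(Q, K, V) = \mathrm{Concat}(\mathrm{head}_1, \ldots, \mathrm{head}_h) W^O,
 \qquad \mathrm{head}_i = \mathrm{Attention}(Q W_i^Q, K W_i^K, V W_i^V)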