XLNet
Youtube search... | ...Google search
* What is XLNet and why It outperforms BERT | BrambleXu - Towards Data Science
* [[Natural Language Processing (NLP)]]
With the capability of modeling bidirectional contexts, denoising autoencoding based pretraining like [[Bidirectional Encoder Representations from Transformers (BERT)]] achieves better performance than pretraining approaches based on autoregressive language modeling. However, because it relies on corrupting the input with masks, BERT neglects dependency between the masked positions and suffers from a pretrain-finetune discrepancy. In light of these pros and cons, the authors propose XLNet, a generalized autoregressive pretraining method that (1) enables learning bidirectional contexts by maximizing the expected likelihood over all permutations of the factorization order and (2) overcomes the limitations of BERT thanks to its autoregressive formulation. Furthermore, XLNet integrates ideas from [[Transformer-XL]], the state-of-the-art autoregressive model, into pretraining. Empirically, XLNet outperforms BERT on 20 tasks, often by a large margin, and achieves state-of-the-art results on 18 tasks including question answering, natural language inference, sentiment analysis, and document ranking. [http://arxiv.org/abs/1906.08237 XLNet: Generalized Autoregressive Pretraining for Language Understanding | Z. Yang, Z. Dai, Y. Yang, J. Carbonell, R. Salakhutdinov, and Q. Le]

<youtube>bDxFvr1gpSU</youtube>
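The key mechanism behind point (1) is permutation language modeling: rather than masking out tokens, XLNet samples a factorization order over the sequence and predicts each token autoregressively in that order, so the context available to a given position changes from sample to sample and, in expectation, covers both directions. The snippet below is a minimal illustrative sketch of this idea in NumPy, not code from the paper or from any XLNet implementation; the function name permutation_attention_mask is invented for this page.

<pre>
# Illustrative sketch only (assumed helper, not the authors' code): build the
# attention mask implied by one sampled factorization order, as in XLNet's
# permutation language modeling objective.
import numpy as np

def permutation_attention_mask(seq_len, rng):
    """Return a (seq_len, seq_len) boolean mask where mask[i, j] is True if
    position i may attend to position j under one sampled factorization order."""
    z = rng.permutation(seq_len)          # a factorization order, e.g. [2, 0, 3, 1, 4]
    rank = np.empty(seq_len, dtype=int)
    rank[z] = np.arange(seq_len)          # rank[i] = step at which position i is predicted
    # Position i may attend to position j iff j is predicted strictly earlier in this
    # order, regardless of whether j lies to the left or right of i in the original text.
    return rank[None, :] < rank[:, None]

rng = np.random.default_rng(0)
print(permutation_attention_mask(5, rng).astype(int))
# Averaged over many sampled orders, every position conditions on tokens both to its
# left and to its right, while each individual order remains strictly autoregressive.
</pre>

In the full model this per-order masking is realized with two-stream self-attention inside the [[Transformer-XL]] backbone mentioned above, so the input sequence itself is never reordered or corrupted with mask tokens.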