ERNIE

  
 
* [[Natural Language Processing (NLP)]]
 
* [http://research.baidu.com/Blog/index-view?id=121 Baidu’s Optimized ERNIE Achieves State-of-the-Art Results in Natural Language Processing Tasks | Baidu Research]
 
* [http://arxiv.org/abs/1907.12412v1 ERNIE 2.0: A Continual Pre-training Framework for Language Understanding | Y. Sun, S. Wang, Y. Li, S. Feng, H. Tian, H. Wu, H. Wang]
 
* [http://github.com/PaddlePaddle/ERNIE ERNIE | Y. Sun, S. Wang, Y. Li, S. Feng, H. Tian, H. Wu, H. Wang - Baidu - GitHub]
 


Recently, pre-trained models have achieved state-of-the-art results in various language understanding tasks, which indicates that pre-training on large-scale corpora may play a crucial role in natural language processing. Current pre-training procedures usually train the model with several simple tasks to grasp the co-occurrence of words or sentences. However, besides co-occurrence, training corpora contain other valuable lexical, syntactic, and semantic information, such as named entities, semantic closeness, and discourse relations. To extract this lexical, syntactic, and semantic information from training corpora to the fullest extent, we propose a continual pre-training framework named ERNIE 2.0, which incrementally builds and learns pre-training tasks through continual multi-task learning. Experimental results demonstrate that ERNIE 2.0 outperforms BERT and XLNet on 16 tasks, including English tasks on the GLUE benchmark and several common tasks in Chinese.
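
Below is a minimal, illustrative Python sketch of the continual multi-task pre-training idea described above: pre-training tasks are introduced one at a time, and each training stage keeps sampling batches from every task introduced so far, so earlier tasks are not forgotten. The task names and the train_step function are placeholders for illustration only, not Baidu's actual PaddlePaddle implementation.

<pre>
import random

# Toy sketch of ERNIE 2.0-style continual multi-task pre-training.
# Tasks are added incrementally; each stage interleaves batches from
# all tasks introduced so far. Task names and train_step are
# illustrative placeholders, not the real ERNIE code.

PRETRAINING_TASKS = [
    "knowledge_masking",    # word-aware task (entity/phrase masking)
    "sentence_reordering",  # structure-aware task
    "discourse_relation",   # semantic-aware task
]

def train_step(task, step):
    """Placeholder for one optimization step on a batch from `task`."""
    loss = random.random()  # stands in for the real task loss
    return loss

def continual_multitask_pretraining(steps_per_stage=3):
    active_tasks = []
    for new_task in PRETRAINING_TASKS:
        # New stage: add the new task, then train on all tasks seen so far.
        active_tasks.append(new_task)
        print(f"--- stage with tasks: {active_tasks}")
        for step in range(steps_per_stage):
            task = random.choice(active_tasks)  # interleave tasks
            loss = train_step(task, step)
            print(f"step {step}: task={task} loss={loss:.3f}")

if __name__ == "__main__":
    continual_multitask_pretraining()
</pre>

The key design point this sketch tries to capture is that new tasks are trained jointly with the previously introduced ones rather than strictly one after another, which is what lets the framework accumulate lexical, syntactic, and semantic knowledge without forgetting earlier tasks.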