Text Transfer Learning
<youtube>zxJJ0T54HX8</youtube>
<youtube>qN9hHlZKIL4</youtube>
- Image/Video Transfer Learning
- Transfer Learning for Text Mining | Weike Pan, Erheng Zhong, and Qiang Yang
- Natural Language Processing (NLP)
- Google achieves state-of-the-art NLP performance with an enormous language model and data set | Kyle Wiggers - VentureBeat. Researchers at Google developed a new data set, the Colossal Clean Crawled Corpus, together with a unified framework and model dubbed the Text-to-Text Transformer, which converts every language problem into a text-to-text format (see the sketch after this list). The Colossal Clean Crawled Corpus was sourced from the Common Crawl project, which scrapes roughly 20 terabytes of English text from the web each month.
- Transfer Learning for Text using Deep Learning Virtual Machine (DLVM) | Anusua Trivedi and Wee Hyong Tok - Microsoft. Transfer algorithms discussed: Bi-Directional Attention Flow (BiDAF), Document-QA (DocQA), Reasoning Network (ReasoNet), R-NET, S-NET, and Assertion-Based Question Answering (ABQA); a generic version of the transfer recipe behind such models is sketched at the end of this section.
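
The text-to-text format is easiest to see in code. Below is a minimal sketch, assuming the Hugging Face transformers library and its public t5-small checkpoint (Google released the Text-to-Text Transformer as T5); the library, checkpoint, and example task prefixes are illustrative choices on our part, not prescribed by the VentureBeat article.

<syntaxhighlight lang="python">
# Minimal text-to-text sketch: every task is text in, text out.
# Assumes the Hugging Face "transformers" library and the public
# "t5-small" checkpoint (illustrative choices, not from the article).
from transformers import T5ForConditionalGeneration, T5Tokenizer

tokenizer = T5Tokenizer.from_pretrained("t5-small")
model = T5ForConditionalGeneration.from_pretrained("t5-small")

# A task prefix tells the model which problem to solve; the answer
# comes back as plain text, so translation and summarization share
# one model and one interface.
tasks = [
    "translate English to German: The house is wonderful.",
    "summarize: Researchers at Google developed a unified framework "
    "and model that converts every language problem into a "
    "text-to-text format.",
]
for prompt in tasks:
    input_ids = tokenizer(prompt, return_tensors="pt").input_ids
    output_ids = model.generate(input_ids, max_new_tokens=40)
    print(tokenizer.decode(output_ids[0], skip_special_tokens=True))
</syntaxhighlight>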
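
The resources above share one transfer recipe: reuse a pretrained text encoder and train only a small task-specific head on the target data. Here is a minimal sketch of that pattern, assuming PyTorch and the Hugging Face transformers library with the bert-base-uncased checkpoint and toy labels (all assumptions for illustration; the DLVM article itself works through the QA models listed above).

<syntaxhighlight lang="python">
# Generic transfer-learning pattern for text: freeze a pretrained
# encoder, fine-tune only a new task head. The checkpoint and toy
# data are assumptions, not taken from the DLVM article.
import torch
from transformers import AutoModelForSequenceClassification, AutoTokenizer

tokenizer = AutoTokenizer.from_pretrained("bert-base-uncased")
model = AutoModelForSequenceClassification.from_pretrained(
    "bert-base-uncased", num_labels=2
)

# Freeze the pretrained encoder so only the randomly initialized
# classification head learns; this is the core transfer-learning move
# when labeled target data is scarce.
for param in model.bert.parameters():
    param.requires_grad = False

optimizer = torch.optim.AdamW(
    (p for p in model.parameters() if p.requires_grad), lr=1e-4
)

# One illustrative training step on a toy sentiment batch.
batch = tokenizer(
    ["the movie was great", "the movie was terrible"],
    return_tensors="pt", padding=True,
)
labels = torch.tensor([1, 0])
loss = model(**batch, labels=labels).loss
loss.backward()
optimizer.step()
</syntaxhighlight>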