Text Transfer Learning
- Learning Techniques
- Image/Video Transfer Learning
- Transfer Learning
- Transfer Learning for Text Mining | Weike Pan, Erheng Zhong, and Qiang Yang
- Large Language Model (LLM) ... Natural Language Processing (NLP) ... Generation ... Classification ... Understanding ... Translation ... Tools & Services
- Google achieves state-of-the-art NLP performance with an enormous language model and data set | Kyle Wiggers - Venture Beat: researchers at Google developed a new data set, the Colossal Clean Crawled Corpus, and a unified framework and model dubbed Text-to-Text Transformer that converts language problems into a text-to-text format (see the sketch after this list). The Colossal Clean Crawled Corpus was sourced from the Common Crawl project, which scrapes roughly 20 terabytes of English text from the web each month.
- Generative Pre-trained Transformer (GPT)
- Generative AI ... Conversational AI ... OpenAI's ChatGPT ... Perplexity ... Microsoft's Bing ... You ... Google's Bard ... Baidu's Ernie
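The text-to-text framing mentioned above can be made concrete with a short sketch: one pretrained model handles different language tasks, distinguished only by a plain-text prefix on the input. This is a minimal example, assuming the Hugging Face transformers library and the public t5-small checkpoint; the helper function text_to_text and the example prompts are illustrative, while the task prefixes follow the T5 paper.

```python
# Minimal sketch: casting two different language tasks into the same
# text-to-text format with a pretrained T5 checkpoint.
# Assumes the Hugging Face `transformers` library (with sentencepiece)
# and the public "t5-small" checkpoint.
from transformers import T5Tokenizer, T5ForConditionalGeneration

tokenizer = T5Tokenizer.from_pretrained("t5-small")
model = T5ForConditionalGeneration.from_pretrained("t5-small")

def text_to_text(prompt: str, max_new_tokens: int = 40) -> str:
    """Run any task expressed as plain text through the same model."""
    inputs = tokenizer(prompt, return_tensors="pt")
    output_ids = model.generate(**inputs, max_new_tokens=max_new_tokens)
    return tokenizer.decode(output_ids[0], skip_special_tokens=True)

# Translation and summarization share one model; only the prefix changes.
print(text_to_text("translate English to German: The house is wonderful."))
print(text_to_text("summarize: Researchers released a large cleaned web corpus and a unified text-to-text model trained on it."))
```

Because every task reads text in and writes text out, transferring the model to a new problem is largely a matter of wording the input, optionally followed by fine-tuning on task-specific examples.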
Transfer algorithms include Bi-Directional Attention Flow (BIDAF), Document-QA (DOCQA), Reasoning Network (ReasoNet), R-NET, S-NET, and Assertion Based Question Answering (ABQA). See Transfer Learning for Text using Deep Learning Virtual Machine (DLVM) | Anusua Trivedi and Wee Hyong Tok - Microsoft.
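These question-answering approaches differ in architecture, but they share the basic transfer-learning recipe for text: reuse a representation pretrained on large corpora and adapt only a small task-specific component to the target data. Below is a minimal sketch of that recipe, assuming PyTorch, the Hugging Face transformers library, and the public bert-base-uncased checkpoint; the two-example dataset and the linear head are purely illustrative.

```python
# Minimal sketch of transfer learning for text: freeze a pretrained
# encoder and train only a small task-specific head on a tiny labeled set.
# Assumes PyTorch, Hugging Face `transformers`, and "bert-base-uncased".
import torch
from torch import nn
from transformers import AutoModel, AutoTokenizer

tokenizer = AutoTokenizer.from_pretrained("bert-base-uncased")
encoder = AutoModel.from_pretrained("bert-base-uncased")
for param in encoder.parameters():                # freeze transferred weights
    param.requires_grad = False

head = nn.Linear(encoder.config.hidden_size, 2)   # new task-specific layer
optimizer = torch.optim.Adam(head.parameters(), lr=1e-3)
loss_fn = nn.CrossEntropyLoss()

texts = ["great movie", "terrible plot"]          # toy labeled examples
labels = torch.tensor([1, 0])
batch = tokenizer(texts, padding=True, return_tensors="pt")

with torch.no_grad():                             # encoder stays fixed
    hidden = encoder(**batch).last_hidden_state[:, 0]   # [CLS] vectors

for _ in range(10):                               # short adaptation loop
    loss = loss_fn(head(hidden), labels)
    optimizer.zero_grad()
    loss.backward()
    optimizer.step()

with torch.no_grad():
    preds = head(hidden).argmax(dim=-1)
print(preds)                                      # predictions on the toy set
```

Freezing the encoder keeps adaptation cheap; unfreezing some or all of its layers (full fine-tuning) usually improves accuracy when more labeled data is available.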