Hugging Face

HuggingGPT

HuggingGPT is a framework that leverages Large Language Models (LLMs) such as ChatGPT to connect the many AI models available in machine learning communities like Hugging Face in order to solve AI tasks. Solving complicated AI tasks spanning different domains and modalities is a key step toward advanced artificial intelligence; while abundant AI models exist for individual domains and modalities, none of them can handle such complicated tasks on its own. Since LLMs have exhibited exceptional ability in language understanding, generation, interaction, and reasoning, the authors argue that an LLM can act as a controller that manages existing AI models, with language serving as a generic interface. Based on this philosophy, HuggingGPT uses ChatGPT to conduct task planning when a user request is received, selects models according to their function descriptions on Hugging Face, executes each subtask with the selected model, and summarizes the response from the execution results. The workflow consists of four stages:

  1. Task Planning: the user request is divided into sub-tasks by an LLM such as ChatGPT
  2. Model Selection: ChatGPT selects expert models hosted on Hugging Face based on their model descriptions
  3. Task Execution: each selected model is invoked and executed, and its results are returned to ChatGPT
  4. Response Generation: ChatGPT integrates the predictions of all models and generates the answer for the user (a sketch of this pipeline follows the list)
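The Python sketch below illustrates this four-stage loop under stated assumptions: the llm() helper is a hypothetical stand-in for a ChatGPT call, and the "task|model" reply format from the selection step is an assumption made for this sketch; only the Hugging Face transformers pipeline() call is a real library API. It is a minimal illustration of the workflow, not the HuggingGPT implementation.

 from transformers import pipeline
 
 def llm(prompt: str) -> str:
     """Hypothetical wrapper around an LLM such as ChatGPT; plug in a real client here."""
     raise NotImplementedError("connect this to an LLM API of your choice")
 
 def hugginggpt_workflow(user_request: str) -> str:
     # 1. Task Planning: the LLM splits the request into sub-tasks.
     plan = llm("Split this request into sub-tasks, one per line: " + user_request)
 
     # 2. Model Selection: the LLM maps each sub-task to a Hugging Face task and model,
     #    based on the model descriptions it is shown ("task|model" per line is assumed here).
     selection = llm("For each sub-task, answer 'task|model' using Hugging Face models:\n" + plan)
 
     # 3. Task Execution: invoke each selected expert model via the transformers pipeline API
     #    (text-only tasks in this simplified sketch).
     results = []
     for line in selection.splitlines():
         task, model = line.split("|", 1)
         expert = pipeline(task.strip(), model=model.strip())
         results.append(expert(user_request))
 
     # 4. Response Generation: the LLM integrates all predictions into a user-facing answer.
     return llm("Summarize these model outputs for the user:\n" + repr(results))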


Hugging Face NLP Library - Open Parallel Corpus (OPUS)


OPUS is a growing collection of translated texts from the web. The OPUS project converts and aligns freely available online data, adds linguistic annotation, and provides the community with a publicly available parallel corpus. The project is undertaken by the University of Helsinki and global partners to gather and open-source a wide variety of language data sets. OPUS is built with open-source tools, and the corpus itself is delivered as an open-content package. The current collection was compiled with several tools; all pre-processing is done automatically, and no manual corrections have been carried out. The OPUS collection continues to grow.
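As an illustration, OPUS corpora can be accessed through the Hugging Face datasets library. The "opus_books" dataset and its "en-fr" language-pair configuration below are examples chosen for this sketch; other OPUS corpora on the Hub follow the same pattern.

 from datasets import load_dataset
 
 # Load the English-French portion of the OPUS Books corpus from the Hugging Face Hub.
 books = load_dataset("opus_books", "en-fr", split="train")
 
 # Each record is a sentence pair stored under a "translation" dict keyed by language code.
 pair = books[0]["translation"]
 print(pair["en"])
 print(pair["fr"])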