Hugging Face
YouTube search... ...Google search
- Hugging Face
- Hugging Face course
- Reinforcement Learning (RL) from Human Feedback (RLHF)
- Pretrain Transformers Models in PyTorch Using Hugging Face Transformers | George Mihaila - TOPBOTS
- OpenChatKit | TogetherCompute ... The first open-source ChatGPT alternative to be released; a 20B-parameter chat model under the Apache-2.0 license, available for free on Hugging Face.
HuggingGPT
- HuggingGPT: Solving AI Tasks with ChatGPT and its Friends in HuggingFace | Y. Shen, K. Song, X. Tan, D. Li, W. Lu, & Y. Zhuang
- Assistants ... Hybrid Assistants ... Agents ... Negotiation ... HuggingGPT ... LangChain
- HuggingGPT ... A Framework That Leverages LLMs to Connect Various AI Models in Machine Learning Communities (e.g., Hugging Face) to Solve AI Tasks
HuggingGPT is a framework that leverages large language models (LLMs) such as ChatGPT to connect the many AI models in machine learning communities like Hugging Face to solve AI tasks. Solving complicated AI tasks that span different domains and modalities is a key step toward advanced artificial intelligence; while abundant AI models exist for individual domains and modalities, none of them can handle such complicated tasks on its own. Because LLMs have exhibited exceptional ability in language understanding, generation, interaction, and reasoning, the authors argue that an LLM can act as a controller that manages existing AI models, with language serving as the generic interface between them. Concretely, HuggingGPT uses ChatGPT to plan tasks when it receives a user request, selects models according to their function descriptions on Hugging Face, executes each subtask with the selected model, and summarizes a response from the execution results. The workflow consists of four stages (a minimal code sketch follows the list):
- Task Planning: ChatGPT (or another LLM) decomposes the user request into sub-tasks
- Model Selection: ChatGPT selects expert models hosted on Hugging Face based on their model descriptions
- Task Execution: each selected model is invoked on its sub-task, and the results are returned to ChatGPT
- Response Generation: ChatGPT integrates the predictions of all models and generates the answer for the user
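The sketch below illustrates these four stages; it is not the authors' implementation. The llm() helper is a hypothetical stand-in for any ChatGPT-style chat API, the prompt formats are simplified, and the hard-coded task-to-model table is an illustrative substitute for HuggingGPT's description-based model selection; only the transformers.pipeline calls are real Hugging Face APIs.

```python
# Minimal sketch of the four HuggingGPT stages (assumptions noted inline).
from transformers import pipeline

def llm(prompt: str) -> str:
    """Hypothetical ChatGPT-style call used for planning, selection, and summarization."""
    raise NotImplementedError("plug in your own chat-model client here")

def hugginggpt(user_request: str) -> str:
    # 1. Task Planning: ask the LLM to split the request into sub-tasks,
    #    one per line, each labelled with a pipeline task name.
    plan = llm("Split this request into sub-tasks, one per line, "
               f"formatted as '<task>: <input>':\n{user_request}")
    subtasks = [line.split(":", 1) for line in plan.splitlines() if ":" in line]

    results = []
    for task, task_input in subtasks:
        task, task_input = task.strip(), task_input.strip()
        # 2. Model Selection: in HuggingGPT the LLM chooses a model from its
        #    Hub description; a fixed mapping stands in for that step here.
        model_id = {
            "summarization": "facebook/bart-large-cnn",
            "translation": "Helsinki-NLP/opus-mt-en-de",
        }.get(task)
        if model_id is None:
            continue
        # 3. Task Execution: run the selected expert model on the sub-task input.
        expert = pipeline(task, model=model_id)
        results.append((task, expert(task_input)))

    # 4. Response Generation: the LLM integrates all model outputs into one answer.
    return llm(f"User asked: {user_request}\nModel results: {results}\n"
               "Write a single final answer for the user.")
```

In the paper's system, stage 2 is done in context: candidate models are filtered from the Hub by task type, and their descriptions are shown to ChatGPT, which picks among them rather than consulting a fixed table.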
Hugging Face NLP Library - Open Parallel Corpus (OPUS)
- Hugging Face dives into machine translation with release of 1,000 models | Khari Johnson - VentureBeat
- Build Your Own Machine Translation Service with Transformers ... Using the latest Helsinki NLP models available in the Transformers library to create a standardized machine translation service | Kyle Gallatin - Towards Data Science
OPUS is a growing collection of translated texts from the web, undertaken by the University of Helsinki and global partners to gather and open-source a wide variety of language data sets. The project converts and aligns free online data, adds linguistic annotation, and provides the community with a publicly available parallel corpus. OPUS is built with open-source tools, and the corpus itself is delivered as an open content package; all pre-processing is automatic and no manual corrections have been carried out. The OPUS collection is growing! ... OPUS: the open parallel corpus
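As a concrete illustration of the Helsinki NLP models mentioned above, the sketch below translates English to German with an OPUS-trained MarianMT checkpoint from the Hugging Face Hub. The model name Helsinki-NLP/opus-mt-en-de is one of the roughly one thousand published language-pair models; substitute the pair you need.

```python
# Translate English to German with an OPUS-trained MarianMT model from the
# Hugging Face Hub (Helsinki-NLP publishes one checkpoint per language pair).
from transformers import MarianMTModel, MarianTokenizer

model_name = "Helsinki-NLP/opus-mt-en-de"   # e.g. swap in opus-mt-en-fr for French
tokenizer = MarianTokenizer.from_pretrained(model_name)
model = MarianMTModel.from_pretrained(model_name)

sentences = ["Hugging Face hosts thousands of pretrained models."]
batch = tokenizer(sentences, return_tensors="pt", padding=True)
generated = model.generate(**batch)
print(tokenizer.batch_decode(generated, skip_special_tokens=True))
```

The same checkpoint can also be wrapped in transformers.pipeline("translation", model=model_name) when a one-line interface is preferred, for example inside a small translation web service.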