Foundation Models (FM)
Revision as of 21:14, 14 April 2023
Artificial intelligence (AI) has come a long way in recent years, with the development of increasingly sophisticated models that can perform a wide range of tasks. One of the most exciting developments in this field is the emergence of foundation models. A foundation model is a “paradigm for building AI systems” in which a model trained on a large amount of unlabeled data can be adapted to many applications. This means that instead of building a separate AI model for each specific task, a single foundation model can be used for multiple tasks with minimal fine-tuning.
Why use foundation models?
One of the main advantages of foundation models is their flexibility. Because they are designed to be adapted to various downstream cognitive tasks by pre-training on broad data at scale, they can be used for a wide range of applications. Another advantage is their reusability. Instead of building a new AI model from scratch for each new task, a single foundation model can be used for multiple tasks with minimal fine-tuning. This saves time and resources and makes it easier to develop and deploy AI systems.
How to use foundation models?
Foundation models are trained on enormous quantities of unlabeled data through self-supervised learning: the model learns by predicting missing information in the data, without the need for explicit labels. Once the foundation model has been trained, it can be used for various tasks through transfer learning, which adapts the model to a new task by fine-tuning it on a smaller amount of labeled data specific to that task. Some examples of foundation models include GPT-3, BERT, and DALL-E 2. These models have shown impressive capabilities in natural language processing and generation, as well as image generation. The use of foundation models has the potential to transform many industries and applications; for example, they could be used to develop more sophisticated digital assistants, improve medical diagnosis, or generate new works of art.
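The pretrain-then-adapt idea above can be illustrated with a deliberately tiny sketch (this is not a real foundation model; the corpus, task, and function names are invented for illustration). "Pretraining" here is self-supervised next-word counting on unlabeled text, and "fine-tuning" reuses that model alongside a small labeled set for a downstream task:

```python
from collections import Counter, defaultdict

# Unlabeled "pretraining" corpus: no task-specific labels attached.
unlabeled = [
    "the movie was great",
    "the movie was terrible",
    "the food was great",
    "the food was awful",
]

# Self-supervised objective: predict a masked word from the word
# before it. The supervision signal comes from the data itself.
next_word = defaultdict(Counter)
for sentence in unlabeled:
    words = sentence.split()
    for prev, nxt in zip(words, words[1:]):
        next_word[prev][nxt] += 1

def predict_masked(prev):
    """Fill in the blank after `prev` using the pretrained counts."""
    return next_word[prev].most_common(1)[0][0]

# "the movie was [MASK]" -> most frequent continuation after "was"
print(predict_masked("was"))  # -> great

# "Fine-tuning" for a downstream task (sentiment): only a handful of
# labeled examples are needed, because the vocabulary statistics were
# already learned during pretraining.
labeled = [("great", "positive"), ("terrible", "negative"), ("awful", "negative")]
sentiment = dict(labeled)

def classify(sentence):
    """Tag a sentence using the small task-specific 'head'."""
    for word in sentence.split():
        if word in sentiment:
            return sentiment[word]
    return "unknown"

print(classify("the food was terrible"))  # -> negative
```

Real foundation models replace the frequency counts with billions of learned neural-network parameters, but the division of labor is the same: one expensive, label-free pretraining phase, then cheap per-task adaptation.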
Amazon AWS
Amazon Bedrock provides multiple foundation models designed to let companies customize and build their own generative AI applications for targeted use cases and commercial use. With Bedrock's serverless experience, you can get started quickly, privately customize FMs with your own data, and integrate and deploy them into your applications using familiar AWS tools and capabilities (including integrations with Amazon SageMaker ML features such as Experiments, to test different models, and Pipelines, to manage FMs at scale) without having to manage any infrastructure.
The initial set of foundation models supported by the service includes ones from:

* [https://www.ai21.com/studio AI21 Labs]
* [https://www.anthropic.com/index/introducing-claude Anthropic Claude]
* [https://stability.ai/blog/stability-ai-makes-its-stable-diffusion-models-available-on-amazons-new-bedrock-service Stability AI]
* [https://aws.amazon.com/bedrock/titan/ Amazon Titan]
Microsoft