Aviary
- A New Tool for the Open Source LLM Developer Stack: Aviary | Richard MacManus, The New Stack. "An open source LLM stack is emerging, says Anyscale's head of engineering. Along with Ray, LangChain and Hugging Face, we can now add Aviary."
- Anyscale
Aviary is an Anyscale tool to help developers work with large language models (LLMs). Anyscale describes it as the "first fully free, cloud-based infrastructure designed to help developers choose and deploy the right technologies and approach for their LLM-based applications." Like Ray, Aviary is being released as an open source project.
Ray
Ray is an open-source distributed computing framework for scaling machine learning and Python workloads, developed by Anyscale. With Ray, developers can scale compute-intensive workloads from a laptop to any cloud with minimal code changes, and a strong ecosystem of distributed libraries and integrations makes it easy to scale existing workloads. Anyscale also offers a fully managed Ray platform that gives developers and AI teams a seamless experience for speeding up development and deploying AI/ML workloads at scale.
Ray has a wide range of use cases for scaling machine learning and Python workloads. Some common use cases include:
- Large language models (LLMs) and generative AI: Ray provides a distributed compute framework for scaling these models, allowing developers to train and deploy models faster and more efficiently. With specialized libraries for data streaming, training, fine-tuning, hyperparameter tuning, and serving, Ray simplifies the process of developing and deploying large-scale AI models.
- Batch Inference: Ray can be used for batch inference, which is the process of generating model predictions on a large “batch” of input data. Ray for batch inference works with any cloud provider and ML framework, and is fast and cheap for modern deep learning applications. It scales from single machines to large clusters with minimal code changes.
- Many Model Training: Many-model training is common in ML use cases such as time series forecasting, which require fitting models on multiple data batches corresponding to locations, products, etc. The focus is on training many models on subsets of a dataset, in contrast to training a single model on the entire dataset. When each model to be trained fits on a single GPU, Ray can assign each training run to a separate Ray Task.