Algorithm Administration
== <span id="Large Language Model Operations (LLMOps)"></span>Large Language Model Operations (LLMOps) ==
* [https://www.databricks.com/glossary/llmops What Is LLMOps? | Databricks]


Large Language Model Operations (LLMOps) is a set of practices and tools for the operational management of large language models (LLMs) in production environments. LLMOps encompasses the entire lifecycle of an LLM, from development and deployment to monitoring and maintenance. It matters because LLMs are complex, resource-intensive models that require careful management to ensure they perform as expected and meet the needs of their users. LLMOps tools and practices streamline and automate the LLM lifecycle, making it easier to deploy and manage LLMs in production.

Here are the key components of LLMOps:

* <b>Model development</b>: streamlining the process of developing new LLMs, from data collection and preparation to model training and evaluation.
* <b>Model deployment</b>: deploying LLMs to production environments safely and efficiently.
* <b>Model monitoring</b>: monitoring the performance of LLMs in production and identifying problems early.
* <b>Model maintenance</b>: maintaining LLMs in production, including fine-tuning models to improve performance and updating them with new data.
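The monitoring component above can be sketched as a small rolling-window check. This is a minimal illustration in plain Python, not any particular LLMOps product's API; the class and method names (ModelMonitor, record, alerts) and the thresholds are hypothetical:

```python
from collections import deque

class ModelMonitor:
    """Rolling-window production monitor for a deployed model (hypothetical API).

    Tracks recent per-request quality scores and latencies, and raises
    alerts when either rolling average degrades -- the 'identify problems
    early' step of model monitoring described above.
    """

    def __init__(self, window=100, min_quality=0.7, max_latency_s=2.0):
        self.scores = deque(maxlen=window)      # e.g. human or heuristic ratings in [0, 1]
        self.latencies = deque(maxlen=window)   # response times in seconds
        self.min_quality = min_quality
        self.max_latency_s = max_latency_s

    def record(self, quality_score, latency_s):
        """Log one production request's outcome."""
        self.scores.append(quality_score)
        self.latencies.append(latency_s)

    def alerts(self):
        """Return alert strings for the current window; empty if healthy."""
        out = []
        if self.scores and sum(self.scores) / len(self.scores) < self.min_quality:
            out.append("quality below threshold")
        if self.latencies and sum(self.latencies) / len(self.latencies) > self.max_latency_s:
            out.append("latency above threshold")
        return out

monitor = ModelMonitor(window=3)
monitor.record(0.9, 0.5)   # healthy request
monitor.record(0.4, 3.0)   # degraded output, slow response
monitor.record(0.5, 2.8)
print(monitor.alerts())    # both rolling averages have crossed their thresholds
```

A real deployment would feed `record` from request logs and wire `alerts` into paging or automated rollback; the point is only that monitoring reduces to tracking a window of production metrics against thresholds.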

LLMOps is still a relatively new field, but it is growing rapidly as more and more organizations adopt LLMs. A number of LLMOps tools and platforms are available, both open source and proprietary. Here are some of the benefits of using LLMOps:

* <b>Improved efficiency and productivity</b>: automating and streamlining the LLM lifecycle frees data scientists and engineers to focus on more creative and strategic tasks.
* <b>Reduced costs</b>: LLMOps reduces the cost of developing and managing LLMs by improving efficiency and reducing the risk of errors.
* <b>Increased reliability and performance</b>: tools and practices for monitoring and maintaining LLMs in production help ensure they perform reliably and meet the needs of their users.

=== How is LLMOps different from AIOps/MLOps? ===
* [https://wandb.ai/mostafaibrahim17/ml-articles/reports/AIOps-vs-MLOps-vs-LLMOps--Vmlldzo1MTQzODMz AIOps vs. MLOps vs. LLMOps | Mostafa Ibrahim - Weights & Biases]
* [https://www.analyticsvidhya.com/blog/2023/08/llmops-vs-mlops/ LLMOPS vs MLOPS: Choosing the Best Path for AI Development | Analytics Vidhya]
* [https://www.xenonstack.com/blog/aiops-vs-mlops AIOps vs MLOps | Know Everything in Detail | Jagreet Kaur - Xenonstack]


LLMOps is a subset of MLOps that focuses specifically on the operational management of large language models (LLMs). MLOps is a set of practices and tools for managing the entire lifecycle of machine learning models, from development through deployment to production. AIOps is a broader field that focuses on using AI and machine learning to improve IT operations and management. The table below summarizes the key differences between LLMOps, AIOps, and MLOps:

<table>
<tr>
<th>Characteristic</th>
<th>LLMOps</th>
<th>AIOps</th>
<th>MLOps</th>
</tr>
<tr>
<td>Focus</td>
<td>Operational management of LLMs</td>
<td>Use of AI and ML to improve IT operations and management</td>
<td>Operational management of machine learning models</td>
</tr>
<tr>
<td>Use cases</td>
<td>Language-related tasks, such as chatbots, language translation, sentiment analysis, and content generation</td>
<td>IT operations tasks, such as incident management, capacity planning, and performance monitoring</td>
<td>Machine learning tasks, such as fraud detection, product recommendation, and predictive maintenance</td>
</tr>
<tr>
<td>Team composition</td>
<td>NLP specialists, linguists, data scientists with language modeling expertise, and software engineers experienced in building language-related applications</td>
<td>Data scientists, analytics professionals, data engineers, domain experts, and IT professionals</td>
<td>Data scientists, machine learning engineers, software engineers, and DevOps engineers</td>
</tr>
</table>


Here are some examples of how LLMOps differs from AIOps/MLOps:

* <b>Model development</b>: LLMOps tools and practices are designed specifically for developing and training LLMs; for example, they may automate the collection and cleaning of text data, or provide specialized libraries for training LLMs.
* <b>Model deployment</b>: LLMOps tools deploy LLMs to production in ways optimized for performance and reliability; for example, scaling LLMs to handle large volumes of traffic, or providing specialized hosting infrastructure.
* <b>Model monitoring and maintenance</b>: LLMOps tools monitor the performance of LLMs in production and flag problems early; for example, tracking accuracy on real-world data or detecting potential biases.
Latest revision as of 15:54, 28 April 2024
YouTube ... Quora ...Google search ...Google News ...Bing News
- AI Solver ... Algorithms ... Administration ... Model Search ... Discriminative vs. Generative ... Train, Validate, and Test
- Development ... Notebooks ... AI Pair Programming ... Codeless ... Hugging Face ... AIOps/MLOps ... AIaaS/MLaaS
- Managed Vocabularies
- Analytics ... Visualization ... Graphical Tools ... Diagrams & Business Analysis ... Requirements ... Loop ... Bayes ... Network Pattern
- Backpropagation ... FFNN ... Forward-Forward ... Activation Functions ...Softmax ... Loss ... Boosting ... Gradient Descent ... Hyperparameter ... Manifold Hypothesis ... PCA
- Strategy & Tactics ... Project Management ... Best Practices ... Checklists ... Project Check-in ... Evaluation ... Measures
- Artificial General Intelligence (AGI) to Singularity ... Curious Reasoning ... Emergence ... Moonshots ... Explainable AI ... Automated Learning
- NLP Workbench / Pipeline
- Data Science ... Governance ... Preprocessing ... Exploration ... Interoperability ... Master Data Management (MDM) ... Bias and Variances ... Benchmarks ... Datasets
- Data Quality ...validity, accuracy, cleaning, completeness, consistency, encoding, padding, augmentation, labeling, auto-tagging, normalization, standardization, and imbalanced data
- Building Your Environment
- Service Capabilities
- AI Marketplace & Toolkit/Model Interoperability
- Directed Acyclic Graph (DAG) - programming pipelines
- Containers; Docker, Kubernetes & Microservices
- Automate your data lineage
- Benefiting from AI: A different approach to data management is needed
- Libraries & Frameworks Overview ... Libraries & Frameworks ... Git - GitHub and GitLab ... Other Coding options
- Use a Pipeline to Chain PCA with a RandomForest Classifier Jupyter Notebook | Jon Tupitza
- ML.NET Model Lifecycle with Azure DevOps CI/CD pipelines | Cesar de la Torre - Microsoft
- A Great Model is Not Enough: Deploying AI Without Technical Debt | DataKitchen - Medium
- Infrastructure Tools for Production | Aparna Dhinakaran - Towards Data Science ...Model Deployment and Serving
- Global Community for Artificial Intelligence (AI) in Master Data Management (MDM) | Camelot Management Consultants
- Particle Swarms for Dynamic Optimization Problems | T. Blackwell, J. Branke, and X. Li
- 5G Security
Tools
- Google AutoML automatically build and deploy state-of-the-art machine learning models
- TensorBoard | Google
- Kubeflow Pipelines - a platform for building and deploying portable, scalable machine learning (ML) workflows based on Docker containers. Introducing AI Hub and Kubeflow Pipelines: Making AI simpler, faster, and more useful for businesses | Google
- SageMaker | Amazon
- MLOps | Microsoft ...model management, deployment, and monitoring with Azure
- Ludwig - a Python toolbox from Uber that allows you to train and test deep learning models
- TPOT - a Python library that automatically creates and optimizes full machine learning pipelines using genetic programming. Not suited to NLP; strings need to be encoded as numerics first.
- H2O Driverless AI for automated Visualization, feature engineering, model training, hyperparameter optimization, and explainability.
- alteryx: Feature Labs, Featuretools
- MLBox Fast reading and distributed data preprocessing/cleaning/formatting. Highly robust feature selection and leak detection. Accurate hyper-parameter optimization in high-dimensional space. State-of-the art predictive models for classification and regression (Deep Learning, Stacking, LightGBM,…). Prediction with models interpretation. Primarily Linux.
- auto-sklearn - automated algorithm selection and hyperparameter tuning, built as a Bayesian optimization layer on top of scikit-learn. It leverages recent advances in Bayesian optimization, meta-learning, and ensemble construction. Not suited to large datasets.
- Auto Keras is an open-source Python package for neural architecture search.
- ATM (Auto Tune Models) - a multi-tenant, multi-data system for automated machine learning (model selection and tuning). ATM is an open source software library under the Human Data Interaction (HDI) project at MIT.
- Auto-WEKA is a Bayesian hyperparameter optimization layer on top of Weka. Weka is a collection of machine learning algorithms for data mining tasks. The algorithms can either be applied directly to a dataset or called from your own Java code. Weka contains tools for data pre-processing, classification, regression, clustering, association rules, and visualization.
- TransmogrifAI - an AutoML library for building modular, reusable, strongly typed machine learning workflows. A Scala/SparkML library created by Salesforce for automated data cleansing, feature engineering, model selection, and hyperparameter optimization
- RECIPE - a framework based on grammar-based genetic programming that builds customized scikit-learn classification pipelines.
- AutoMLC Automated Multi-Label Classification. GA-Auto-MLC and Auto-MEKAGGP are freely-available methods that perform automated multi-label classification on the MEKA software.
- Databricks MLflow - an open source framework to manage the complete machine learning lifecycle, including experimentation, reproducibility, and deployment; available as Managed MLflow, an integrated service with the Databricks Unified Analytics Platform
- SAS Viya automates the process of data cleansing, data transformations, feature engineering, algorithm matching, model training and ongoing governance.
- Comet ML ...self-hosted and cloud-based meta machine learning platform allowing data scientists and teams to track, compare, explain and optimize experiments and models
- Domino Model Monitor (DMM) | Domino ...monitor the performance of all models across your entire organization
- Weights and Biases ...experiment tracking, model optimization, and dataset versioning
- SigOpt ...optimization platform and API designed to unlock the potential of modeling pipelines. This fully agnostic software solution accelerates, amplifies, and scales the model development process
- DVC ...Open-source Version Control System for Machine Learning Projects
- ModelOp Center | ModelOp
- Moogsoft and Red Hat Ansible Tower
- DSS | Dataiku
- Model Manager | SAS
- Machine Learning Operations (MLOps) | DataRobot ...build highly accurate predictive models with full transparency
- Metaflow, Netflix and AWS open source Python library
Master Data Management (MDM)
YouTube search... ...Google search ...Feature Store / Data Lineage / Data Catalog
- Data Science ... Governance ... Preprocessing ... Exploration ... Interoperability ... Master Data Management (MDM) ... Bias and Variances ... Benchmarks ... Datasets
Versioning
YouTube search... ...Google search
- DVC | DVC.org
- Pachyderm ...Pachyderm for data scientists | Gerben Oostra - bigdata - Medium
- Dataiku
- Continuous Machine Learning (CML)
Model Versioning - ModelDB
- ModelDB: An open-source system for Machine Learning model versioning, metadata, and experiment management
Hyperparameter
YouTube search... ...Google search
- Backpropagation ... FFNN ... Forward-Forward ... Activation Functions ...Softmax ... Loss ... Boosting ... Gradient Descent ... Hyperparameter ... Manifold Hypothesis ... PCA
- Hypernetworks
- Using TensorFlow Tuning
- Understanding Hyperparameters and its Optimisation techniques | Prabhu - Towards Data Science
- How To Make Deep Learning Models That Don’t Suck | Ajay Uppili Arasanipalai
- Optimization Methods
In machine learning, a hyperparameter is a parameter whose value is set before the learning process begins. By contrast, the values of other parameters are derived via training. Different model training algorithms require different hyperparameters, some simple algorithms (such as ordinary least squares regression) require none. Given these hyperparameters, the training algorithm learns the parameters from the data. Hyperparameter (machine learning) | Wikipedia
Machine learning algorithms train on data to find the best set of weights for each independent variable that affects the predicted value or class. The algorithms themselves have variables, called hyperparameters. They're called hyperparameters, as opposed to parameters, because they control the operation of the algorithm rather than the weights being determined. The most important hyperparameter is often the learning rate, which determines the step size used when finding the next set of weights to try when optimizing. If the learning rate is too high, the gradient descent may quickly converge on a plateau or suboptimal point. If the learning rate is too low, the gradient descent may stall and never completely converge. Many other common hyperparameters depend on the algorithms used. Most algorithms have stopping parameters, such as the maximum number of epochs, the maximum time to run, or the minimum improvement from epoch to epoch. Specific algorithms have hyperparameters that control the shape of their search. For example, a Random Forest (or Random Decision Forest) classifier has hyperparameters for minimum samples per leaf, max depth, minimum samples at a split, minimum weight fraction for a leaf, and about 8 more. Machine learning algorithms explained | Martin Heller - InfoWorld
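The learning-rate behavior described above can be demonstrated on a toy objective. A minimal sketch in plain Python, minimizing f(w) = (w - 3)^2 with plain gradient descent; the function and learning-rate values are illustrative only:

```python
def gradient_descent(lr, steps=50, w=0.0):
    """Minimize f(w) = (w - 3)^2 with a fixed learning rate.

    w is the parameter learned by the update rule; lr is the
    hyperparameter chosen before training begins.
    """
    for _ in range(steps):
        w -= lr * 2 * (w - 3)  # gradient of (w - 3)^2 is 2 * (w - 3)
    return w

print(gradient_descent(lr=0.1))    # converges close to the optimum w = 3
print(gradient_descent(lr=1.05))   # step too large: overshoots and diverges
print(gradient_descent(lr=1e-4))   # step too small: stalls near the starting w = 0
```

The same three regimes (converge, diverge, stall) appear with real models; only the safe range of learning rates changes.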
Hyperparameter Tuning
Hyperparameters are the variables that govern the training process. Your model parameters are optimized (you could say "tuned") by the training process: you run data through the operations of the model, compare the resulting prediction with the actual value for each data instance, evaluate the accuracy, and adjust until you find the best combination to handle the problem. These algorithms automatically adjust (learn) their internal parameters based on data. However, there is a subset of parameters that is not learned and that has to be configured by an expert. Such parameters are often referred to as "hyperparameters" — and they have a big impact. For example, the tree depth in a decision tree model and the number of layers in an artificial Neural Network are typical hyperparameters. The performance of a model can drastically depend on the choice of its hyperparameters. Machine learning algorithms and the art of hyperparameter selection - A review of four optimization strategies | Mischa Lisovyi and Rosaria Silipo - TNW
There are four commonly used optimization strategies for hyperparameters:
- Bayesian optimization
- Grid search
- Random search
- Hill climbing
Bayesian optimization tends to be the most efficient. You would think that tuning as many hyperparameters as possible would give you the best answer. However, unless you are running on your own personal hardware, that could be very expensive. There are diminishing returns, in any case. With experience, you’ll discover which hyperparameters matter the most for your data and choice of algorithms. Machine learning algorithms explained | Martin Heller - InfoWorld
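Two of the strategies above, grid search and random search, can be sketched in a few lines of plain Python. The objective here is a toy stand-in for training a model and measuring validation loss; the function, search ranges, and trial budget are illustrative, not from any specific library:

```python
import itertools
import random

def validation_loss(lr, batch_size):
    """Toy stand-in for a real validation loss; minimized at
    lr = 0.01, batch_size = 64."""
    return (lr - 0.01) ** 2 + (batch_size - 64) ** 2 / 1e4

# Grid search: exhaustively evaluate every combination on a fixed grid.
grid = list(itertools.product([0.001, 0.01, 0.1], [32, 64, 128]))
best_grid = min(grid, key=lambda p: validation_loss(*p))

# Random search: spend the same budget (9 trials) on random samples,
# which also explores values between the grid points.
random.seed(0)
trials = [(10 ** random.uniform(-3, -1), random.randint(16, 256))
          for _ in range(len(grid))]
best_random = min(trials, key=lambda p: validation_loss(*p))

print("grid search best:", best_grid)    # (0.01, 64) sits exactly on the grid
print("random search best:", best_random)
```

Bayesian optimization improves on both by using the losses already observed to choose the next trial, rather than sampling blindly.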
Hyperparameter Optimization libraries:
- hyper-engine - Gaussian Process Bayesian optimization and some other techniques, like learning curve prediction
- Ray Tune: Hyperparameter Optimization Framework
- SigOpt’s API tunes your model’s parameters through state-of-the-art Bayesian optimization
- hyperopt; Distributed Asynchronous Hyperparameter Optimization in Python - random search and Tree of Parzen Estimators (TPE) optimization.
- Scikit-Optimize, or skopt - Gaussian process Bayesian optimization
- polyaxon
- GPyOpt; Gaussian Process Optimization
Tuning:
- Optimizer type
- Learning rate (fixed or not)
- Epochs
- Regularization rate (or not)
- Type of Regularization - L1, L2, ElasticNet
- Search type for local minima
  - Gradient descent
  - Simulated annealing
  - Evolutionary
- Decay rate (or not)
- Momentum (fixed or not)
- Nesterov Accelerated Gradient momentum (or not)
- Batch size
- Fitness measurement type
- MSE, accuracy, MAE, Cross-Entropy Loss
- Precision, recall
- Stop criteria
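One of the local-minima search strategies listed above, simulated annealing, is compact enough to sketch directly. The one-dimensional `loss` surface and the cooling schedule below are made-up choices for illustration:

```python
import math
import random

# Hypothetical loss surface with local minima; a stand-in for
# validation loss as a function of a single hyperparameter.
def loss(x):
    return x * x + 3 * math.sin(5 * x)

random.seed(0)
x = 2.0                          # starting point
temperature = 1.0
best_x, best_loss = x, loss(x)

for step in range(2000):
    candidate = x + random.gauss(0, 0.1)     # small random move
    delta = loss(candidate) - loss(x)
    # Accept improvements always; accept worse moves with a probability
    # that shrinks as the temperature cools. This is what lets the
    # search escape local minima, unlike plain gradient descent.
    if delta < 0 or random.random() < math.exp(-delta / temperature):
        x = candidate
    if loss(x) < best_loss:
        best_x, best_loss = x, loss(x)
    temperature *= 0.997                     # geometric cooling schedule

print(round(best_x, 2), round(best_loss, 2))
```

Early on, high temperature means almost any move is accepted (broad exploration); as it cools, the walk settles into the deepest basin it has found.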
Automated Learning
YouTube search... ...Google search
- Artificial General Intelligence (AGI) to Singularity ... Curious Reasoning ... Emergence ... Moonshots ... Explainable AI ... Automated Learning
- Immersive Reality ... Metaverse ... Omniverse ... Transhumanism ... Religion
- Telecommunications ... Computer Networks ... 5G ... Satellite Communications ... Quantum Communications ... Communication Agents ... Smart Cities ... Digital Twin ... Internet of Things (IoT)
- Large Language Model (LLM) ... Multimodal ... Foundation Models (FM) ... Generative Pre-trained ... Transformer ... GPT-4 ... GPT-5 ... Attention ... GAN ... BERT
- Development ... Notebooks ... AI Pair Programming ... Codeless, Generators, Drag n' Drop ... AIOps/MLOps ... AIaaS/MLaaS
- AdaNet
- Creatives ... History of Artificial Intelligence (AI) ... Neural Network History ... Rewriting Past, Shape our Future ... Archaeology ... Paleontology
- Generative Pre-trained Transformer 5 (GPT-5)
- AI Software Learns to Make AI Software
- The Pentagon Wants AI to Take Over the Scientific Process | Automating Scientific Knowledge Extraction (ASKE) | DARPA
- Hallucinogenic Deep Reinforcement Learning Using Python and Keras | David Foster
- Automated Feature Engineering in Python - How to automatically create machine learning features | Will Koehrsen - Towards Data Science
- Why Meta-learning is Crucial for Further Advances of Artificial Intelligence? | Pavel Kordik
- Assured Autonomy | Dr. Sandeep Neema, DARPA
- Automatic Machine Learning is Broken | Piotr Plonski - KDnuggets
- Why 2020 will be the Year of Automated Machine Learning | Senthil Ravindran - Gigabit
- Meta Learning | Wikipedia
Several production machine-learning platforms now offer automatic hyperparameter tuning. Essentially, you tell the system what hyperparameters you want to vary, and possibly what metric you want to optimize, and the system sweeps those hyperparameters across as many runs as you allow. (Google Cloud hyperparameter tuning extracts the appropriate metric from the TensorFlow model, so you don’t have to specify it.)
An emerging class of data science toolkit that is finally making machine learning accessible to business subject matter experts. We anticipate that these innovations will mark a new era in data-driven decision support, where business analysts will be able to access and deploy machine learning on their own to analyze hundreds and thousands of dimensions simultaneously. Business analysts at highly competitive organizations will shift from using visualization tools as their only means of analysis, to using them in concert with AML. Data visualization tools will also be used more frequently to communicate model results, and to build task-oriented user interfaces that enable stakeholders to make both operational and strategic decisions based on output of scoring engines. They will also continue to be a more effective means for analysts to perform inverse analysis when one is seeking to identify where relationships in the data do not exist. 'Five Essential Capabilities: Automated Machine Learning' | Gregory Bonnette
H2O Driverless AI automatically performs feature engineering and hyperparameter tuning, and claims to perform as well as Kaggle masters. Amazon SageMaker supports hyperparameter optimization. Microsoft Azure Machine Learning AutoML automatically sweeps through features, algorithms, and hyperparameters for basic machine learning algorithms; a separate Azure Machine Learning hyperparameter tuning facility allows you to sweep specific hyperparameters for an existing experiment. Google Cloud AutoML implements automatic deep transfer learning (meaning that it starts from an existing Deep Neural Network (DNN) trained on other data) and neural architecture search (meaning that it finds the right combination of extra network layers) for language pair translation, natural language classification, and image classification. Review: Google Cloud AutoML is truly automated machine learning | Martin Heller
AutoML
YouTube search... ...Google search
- Automated Machine Learning (AutoML) | Wikipedia
- AutoML.org ...ML Freiburg ... GitHub and ML Hannover
AutoML is Google's cloud software suite of machine learning tools. It's based on Google's state-of-the-art research in image recognition called Neural Architecture Search (NAS). NAS is basically an algorithm that, given your specific dataset, searches for the most optimal neural network to perform a certain task on that dataset. AutoML is then a suite of machine learning tools that allows one to easily train high-performance deep networks, without requiring the user to have any knowledge of deep learning or AI; all you need is labelled data! Google will then use NAS to find the best network for your specific dataset and task. AutoKeras: The Killer of Google's AutoML | George Seif - KDnuggets
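The expensive part of NAS is training and evaluating each candidate network; the search loop itself can be as simple as random sampling. A toy sketch, where the search space, the `evaluate` scoring function, and the preference for mid-sized models are all made up to stand in for a full training run per candidate:

```python
import random

# Hypothetical architecture search space: depth and width choices.
SPACE = {"layers": [2, 4, 8], "units": [32, 64, 128, 256]}

# Stand-in for the expensive train-and-evaluate step NAS performs for
# each candidate network; a made-up score, purely for illustration.
def evaluate(arch):
    return -abs(arch["layers"] - 4) - abs(arch["units"] - 128) / 64

def random_nas(trials, seed=0):
    """Sample architectures at random, keep the best-scoring one."""
    rng = random.Random(seed)
    best, best_score = None, float("-inf")
    for _ in range(trials):
        arch = {k: rng.choice(v) for k, v in SPACE.items()}
        score = evaluate(arch)
        if score > best_score:
            best, best_score = arch, score
    return best

print(random_nas(20))
```

Production NAS systems replace the random sampler with reinforcement learning, evolution, or gradient-based methods (see DARTS below), but the shape of the loop — propose, evaluate, keep the best — is the same.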
Automatic Machine Learning (AML)
Self-Learning
DARTS: Differentiable Architecture Search
YouTube search... ...Google search
- DARTS: Differentiable Architecture Search | H. Liu, K. Simonyan, and Y. Yang ...addresses the scalability challenge of architecture search by formulating the task in a differentiable manner. Unlike conventional approaches that apply evolution or reinforcement learning over a discrete, non-differentiable search space, the method is based on a continuous relaxation of the architecture representation, allowing efficient search of the architecture using gradient descent.
- Neural Architecture Search | Debadeepta Dey - Microsoft Research
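The continuous relaxation at the heart of DARTS can be sketched in a few lines: each edge computes a softmax-weighted mixture of all candidate operations, which makes the architecture weights differentiable. The toy operations below are stand-ins for real convolution/pooling/identity candidates:

```python
import math

# Candidate operations on an edge (hypothetical stand-ins for
# convolution, pooling, identity, and "zero" ops in a real cell).
OPS = {
    "identity": lambda x: x,
    "double":   lambda x: 2 * x,
    "zero":     lambda x: 0.0,
}

def softmax(alphas):
    exps = [math.exp(a) for a in alphas]
    total = sum(exps)
    return [e / total for e in exps]

def mixed_op(x, alphas):
    """Continuous relaxation: instead of picking ONE discrete op, the
    edge outputs a softmax-weighted sum of ALL candidate ops, so the
    architecture weights (alphas) can be trained by gradient descent."""
    weights = softmax(alphas)
    return sum(w * op(x) for w, op in zip(weights, OPS.values()))

# With equal alphas, every op contributes equally:
print(mixed_op(1.0, [0.0, 0.0, 0.0]))    # (1 + 2 + 0) / 3 = 1.0
# After training, the op with the largest alpha dominates, and it is
# the one kept when the final discrete architecture is derived:
print(mixed_op(1.0, [0.0, 10.0, 0.0]))   # ≈ double(1.0) = 2.0
```

The final step in DARTS discretizes: on each edge, keep only the operation with the largest learned alpha.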
AIOps/MLOps
YouTube search... ...Google search
- Development ... Notebooks ... AI Pair Programming ... Codeless ... Hugging Face ... AIOps/MLOps ... AIaaS/MLaaS
- MLflow
- ChatGPT integration with DevSecOps
- DevOps.com
- A Silver Bullet For CIOs; Three ways AIOps can help IT leaders get strategic - Lisa Wolfe - Forbes
- MLOps: What You Need To Know | Tom Taulli - Forbes
- What is so Special About AIOps for Mission Critical Workloads? | Rebecca James - DevOps
- What is AIOps? Artificial Intelligence for IT Operations Explained | BMC
- AIOps: Artificial Intelligence for IT Operations, Modernize and transform IT Operations with solutions built on the only Data-to-Everything platform | splunk>
- How to Get Started With AIOps | Susan Moore - Gartner
- Why AI & ML Will Shake Software Testing up in 2019 | Oleksii Kharkovyna - Medium
- Continuous Delivery for Machine Learning | D. Sato, A. Wider and C. Windheuser - MartinFowler
- Defense: Adaptive Acquisition Framework (AAF)
Machine learning capabilities give IT operations teams contextual, actionable insights to make better decisions on the job. More importantly, AIOps is an approach that transforms how systems are automated, detecting important signals from vast amounts of data and relieving the operator from the headaches of managing according to tired, outdated runbooks or policies. In the AIOps future, the environment is continually improving. The administrator can get out of the impossible business of refactoring rules and policies that are immediately outdated in today’s modern IT environment. Now that we have AI and machine learning technologies embedded into IT operations systems, the game changes drastically. AI and machine learning-enhanced automation will bridge the gap between DevOps and IT Ops teams: helping the latter solve issues faster and more accurately to keep pace with business goals and user needs. How AIOps Helps IT Operators on the Job | Ciaran Byrne - Toolbox
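A toy illustration of the signal-detection idea behind AIOps: flag metric values that deviate sharply from a rolling baseline. The latency data, window size, and threshold below are made-up values for illustration:

```python
import statistics

def detect_anomalies(metric_stream, window=20, threshold=3.0):
    """Flag points that deviate from the rolling mean of the previous
    `window` samples by more than `threshold` standard deviations."""
    alerts = []
    for i in range(window, len(metric_stream)):
        history = metric_stream[i - window:i]
        mean = statistics.fmean(history)
        stdev = statistics.pstdev(history) or 1e-9  # avoid divide-by-zero
        if abs(metric_stream[i] - mean) / stdev > threshold:
            alerts.append(i)
    return alerts

# Steady latency around 100 ms, with one spike injected at index 30.
latencies = [100.0 + (i % 3) for i in range(40)]
latencies[30] = 500.0
print(detect_anomalies(latencies))   # → [30]
```

Real AIOps platforms apply far more sophisticated models across many correlated metrics, but the core replacement of static runbook thresholds with learned baselines looks like this.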
Large Language Model Operations (LLMOps)
Large Language Model Operations (LLMOps) is a set of practices and tools for the operational management of large language models (LLMs) in production environments. LLMOps encompasses the entire lifecycle of LLMs, from development and deployment to monitoring and maintenance. LLMOps is important because LLMs are complex and resource-intensive models that require careful management to ensure that they are performing as expected and meeting the needs of their users. LLMOps tools and practices can help to streamline and automate the LLM lifecycle, making it easier to deploy and manage LLMs in production.
Here are some of the key components of LLMOps:
- Model development: LLMOps tools and practices can help to streamline the process of developing new LLMs, from data collection and preparation to model training and evaluation.
- Model deployment: LLMOps tools and practices can help to deploy LLMs to production environments in a safe and efficient manner.
- Model monitoring: LLMOps tools and practices can help to monitor the performance of LLMs in production and identify any problems early on.
- Model maintenance: LLMOps tools and practices can help to maintain LLMs in production, including fine-tuning models to improve their performance and updating models with new data.
LLMOps is still a relatively new field, but it is rapidly growing as more and more organizations adopt LLMs. There are a number of LLMOps tools and platforms available, both open source and proprietary. Here are some of the benefits of using LLMOps:
- Improved efficiency and productivity: LLMOps tools and practices can help to automate and streamline the LLM lifecycle, freeing up data scientists and engineers to focus on more creative and strategic tasks.
- Reduced costs: LLMOps can help to reduce the costs of developing and managing LLMs by improving efficiency and reducing the risk of errors.
- Increased reliability and performance: LLMOps can help to ensure that LLMs are performing reliably and meeting the needs of their users by providing tools and practices for monitoring and maintaining LLMs in production.
How is LLMOps different from AIOps/MLOps?
- AIOps vs. MLOps vs. LLMOps | Mostafa Ibrahim - Weights & Biases
- LLMOPS vs MLOPS: Choosing the Best Path for AI Development | Analytics Vidhya
- AIOps vs MLOps | Know Everything in Detail | Jagreet Kaur - Xenonstack
LLMOps is a subset of MLOps that focuses specifically on the operational management of large language models (LLMs). MLOps is a set of practices and tools for managing the entire lifecycle of machine learning models, from development to deployment to production. AIOps is a broader field that focuses on using AI and machine learning to improve IT operations and management. Here is a table that summarizes the key differences between LLMOps, AIOps, and MLOps:
| Characteristic | LLMOps | AIOps | MLOps |
|---|---|---|---|
| Focus | Operational management of LLMs | Use of AI and ML to improve IT operations and management | Operational management of machine learning models |
| Use cases | Language-related tasks, such as chatbots, language translation, sentiment analysis, and content generation | IT operations tasks, such as incident management, capacity planning, and performance monitoring | Machine learning tasks, such as fraud detection, product recommendation, and predictive maintenance |
| Team composition | NLP specialists, linguists, data scientists with language modeling expertise, and software engineers experienced in building language-related applications | Data scientists, analytics professionals, data engineers, domain experts, and IT professionals | Data scientists, machine learning engineers, software engineers, and DevOps engineers |
Here are some examples of how LLMOps is different from AIOps/MLOps:
- Model development: LLMOps tools and practices are specifically designed for developing and training LLMs. For example, LLMOps tools may help to automate the process of collecting and cleaning text data, or they may provide specialized libraries for training LLMs.
- Model deployment: LLMOps tools and practices can help to deploy LLMs to production environments in a way that is optimized for performance and reliability. For example, LLMOps tools may help to scale LLMs to handle large volumes of traffic, or they may provide specialized infrastructure for hosting LLMs.
- Model monitoring and maintenance: LLMOps tools and practices can help to monitor the performance of LLMs in production and identify any problems early on. For example, LLMOps tools may track the accuracy of LLMs on real-world data, or they may detect potential biases in LLMs.
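One concrete LLMOps monitoring practice is a prompt regression suite run before and after every model update. A minimal sketch, where `stub_llm`, its canned answers, and the evaluation cases are all hypothetical stand-ins for a deployed model endpoint:

```python
# Stub standing in for a deployed LLM endpoint; a real LLMOps setup
# would call the actual model API here.
def stub_llm(prompt):
    canned = {
        "Translate 'bonjour' to English.": "Hello",
        "What is 2 + 2?": "4",
    }
    return canned.get(prompt, "I don't know.")

# Minimal regression suite: prompts paired with a substring the
# answer must contain (case-insensitive).
EVAL_CASES = [
    ("Translate 'bonjour' to English.", "hello"),
    ("What is 2 + 2?", "4"),
]

def run_evals(model):
    """Return a pass/fail map over the regression suite."""
    results = {}
    for prompt, expected in EVAL_CASES:
        results[prompt] = expected in model(prompt).lower()
    return results

print(run_evals(stub_llm))
```

Substring checks are the crudest possible grader; production suites add semantic similarity scoring, toxicity/bias checks, and latency and cost tracking per prompt.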
Continuous Machine Learning (CML)
- Continuous Machine Learning (CML) ...is Continuous Integration/Continuous Deployment (CI/CD) for Machine Learning Projects
- DVC | DVC.org
DevSecOps
YouTube search... ...Google search
- Cybersecurity
- Containers; Docker, Kubernetes & Microservices
- SafeCode ...nonprofit organization that brings business leaders and technical experts together to exchange insights and ideas on creating, improving and promoting scalable and effective software security programs.
- 3 ways AI will advance DevSecOps | Joseph Feiman - TechBeacon
- Leveraging AI and Automation for Successful DevSecOps | Vishnu Nallani - DZone
DevSecOps (also known as SecDevOps and DevOpsSec) is the process of integrating secure development best practices and methodologies into continuous design, development, deployment, and integration processes.
DevSecOps in Government
- Evaluation: DevSecOps Guide | General Services Administration (GSA)
- Cybersecurity & Acquisition Lifecycle Integration
- Defense|DOD Enterprise DevSecOps Reference Design Version 1.0 12 August 2019 | Department of Defense (DOD) Chief Information Officer (CIO)
- Understanding the Differences Between Agile & DevSecOps - from a Business Perspective | General Services Administration (GSA)
Strangler Fig / Strangler Pattern
- Strangler Fig Application | Martin Fowler
- The Strangler pattern in practice | Michiel Rook
- Strangler pattern ...Cloud Design Patterns | Microsoft
- How to use strangler pattern for microservices modernization | N. Natean - Software Intelligence Plus
- Serverless Strangler Pattern on AWS | Ryan Means - Medium ... Serverless
Strangulation of a legacy or undesirable solution is a safe way to phase one thing out for something better, cheaper, or more expandable. You make something new that obsoletes a small percentage of something old, and put them live together. You do some more work in the same style, and go live again (rinse, repeat). Strangler Applications | Paul Hammant ...case studies
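The incremental cut-over can be pictured as a routing facade sitting in front of both systems; as features migrate, the routing set grows until the legacy system can be retired. A toy sketch — the feature names and handlers are hypothetical:

```python
# Strangler facade: route requests for migrated features to the new
# implementation, everything else to the legacy system.
def legacy_handler(feature, payload):
    return f"legacy:{feature}"

def new_handler(feature, payload):
    return f"new:{feature}"

MIGRATED = {"invoicing"}          # grows as strangulation proceeds

def route(feature, payload=None):
    """Single entry point; callers never know which system answered."""
    handler = new_handler if feature in MIGRATED else legacy_handler
    return handler(feature, payload)

print(route("invoicing"))   # → new:invoicing
print(route("reporting"))   # → legacy:reporting
```

Because the facade is the only thing callers see, each migration step is a one-line change to the routing set and can be rolled back just as cheaply.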
Model Monitoring
YouTube search... ...Google search
- How do you evaluate the performance of a machine learning model that's deployed into production? | Quora
- Why your Models need Maintenance | Martin Schmitz - Towards Data Science ...Change of concept & drift of Concept
- Deployed your Machine Learning Model? Here’s What you Need to Know About Post-Production Monitoring | Om Deshmukh - Analytics Vidhya ...proactive & reactive model monitoring
Monitoring production systems is essential to keeping them running well. For ML systems, monitoring becomes even more important, because their performance depends not just on factors that we have some control over, like infrastructure and our own software, but also on data, which we have much less control over. Therefore, in addition to monitoring standard metrics like latency, traffic, errors and saturation, we also need to monitor model prediction performance.

An obvious challenge with monitoring model performance is that we usually don't have a verified label to compare our model's predictions to, since the model works on new data. In some cases we might have some indirect way of assessing the model's effectiveness, for example by measuring click rate for a recommendation model. In other cases, we might have to rely on comparisons between time periods, for example by calculating a percentage of positive classifications hourly and alerting if it deviates by more than a few percent from the average for that time. Just like when validating the model, it's also important to monitor metrics across slices, and not just globally, to be able to detect problems affecting specific segments. ML Ops: Machine Learning as an Engineering Discipline | Cristiano Breuel - Towards Data Science
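The hourly positive-rate check described above can be sketched in a few lines. The baseline rate, tolerance, and prediction data are made-up values for illustration:

```python
def positive_rate_alerts(hourly_predictions, baseline_rate, tolerance=0.05):
    """Alert for any hour whose positive-classification rate deviates
    from the historical baseline by more than `tolerance`."""
    alerts = []
    for hour, preds in enumerate(hourly_predictions):
        rate = sum(preds) / len(preds)
        if abs(rate - baseline_rate) > tolerance:
            alerts.append(hour)
    return alerts

# Baseline: the model historically labels ~50% of traffic positive.
hours = [
    [1, 0, 1, 0, 1, 0, 1, 0, 1, 0],   # 50% -- normal
    [1, 1, 0, 0, 1, 0, 1, 0, 1, 0],   # 50% -- normal
    [1, 1, 1, 1, 1, 1, 1, 1, 1, 0],   # 90% -- drifted
]
print(positive_rate_alerts(hours, baseline_rate=0.5))   # → [2]
```

As the quoted passage notes, the same check should also be run per slice (per region, per customer segment), since drift confined to one segment can vanish in the global rate.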
A/B Testing
YouTube search... ...Google search
A/B testing (also known as bucket testing or split-run testing) is a user experience research methodology. A/B tests consist of a randomized experiment with two variants, A and B. It includes application of statistical hypothesis testing or "two-sample hypothesis testing" as used in the field of statistics. A/B testing is a way to compare two versions of a single variable, typically by testing a subject's response to variant A against variant B, and determining which of the two variants is more effective. A/B testing | Wikipedia
A randomized controlled trial (or randomized control trial; RCT) is a type of scientific (often medical) experiment that aims to reduce certain sources of bias when testing the effectiveness of new treatments; this is accomplished by randomly allocating subjects to two or more groups, treating them differently, and then comparing them with respect to a measured response. One group—the experimental group—receives the intervention being assessed, while the other—usually called the control group—receives an alternative treatment, such as a placebo or no intervention. The groups are monitored under conditions of the trial design to determine the effectiveness of the experimental intervention, and efficacy is assessed in comparison to the control. There may be more than one treatment group or more than one control group. Randomized controlled trial | Wikipedia
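The "two-sample hypothesis testing" behind an A/B test is commonly a two-proportion z-test. A minimal sketch — the conversion counts below are made up:

```python
import math

def two_proportion_z(conv_a, n_a, conv_b, n_b):
    """Two-sample z-statistic for the difference between two
    conversion rates, using the pooled standard error."""
    p_a, p_b = conv_a / n_a, conv_b / n_b
    pooled = (conv_a + conv_b) / (n_a + n_b)
    se = math.sqrt(pooled * (1 - pooled) * (1 / n_a + 1 / n_b))
    return (p_b - p_a) / se

# Variant A: 200 conversions out of 2000; variant B: 260 out of 2000.
z = two_proportion_z(200, 2000, 260, 2000)
print(round(z, 2))        # → 2.97
# |z| > 1.96 means the difference is significant at the 5% level
# (two-sided test).
print(abs(z) > 1.96)
```

In practice you would fix the sample size in advance via a power calculation and avoid peeking at the statistic mid-experiment, both of which inflate the false-positive rate if skipped.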
Scoring Deployed Models
YouTube search... ...Google search