Evaluation
YouTube search... ...Google search
- Evaluation
  - Train, Validate, and Test
  - Model Monitoring
- Cybersecurity: Evaluating & Selling
- Strategy & Tactics
- AIOps / MLOps
- Automated Scoring
- Imbalanced Data
- Five ways to evaluate AI systems | Felix Wetzel - Recruiting Daily
- Cyber Security Evaluation Tool (CSET®) ...provides a systematic, disciplined, and repeatable approach for evaluating an organization’s security posture.
- 3 Common Technical Debts in Machine Learning and How to Avoid Them | Derek Chia - Towards Data Science
Many products today leverage artificial intelligence across a wide range of industries, from healthcare to marketing. However, most business leaders who need to make strategic and procurement decisions about these technologies have no formal AI background or academic training in data science. The purpose of this article is to give business people with no AI expertise a general guideline on how to assess an AI-related product to help decide whether it is potentially relevant to their business.
How to Assess an Artificial Intelligence Product or Solution (Even if You’re Not an AI Expert) | Daniel Faggella - Emerj
- What challenge does the AI solve?
- Is the intent of the AI to increase performance (detection), reduce costs (predictive maintenance, reduced inventory), decrease response time, or achieve other outcome(s)?
- What type of analytics does the AI address? Descriptive (what happened?), Diagnostic (why did it happen?), Predictive/Preventive (what could happen?), Prescriptive (what should happen?), Cognitive (what steps should be taken?)
- What is the clear and realistic way of measuring the success of the AI initiative?
- Is the organization using the implementation to gain better capability in the future?
- Does the AI reside in a procured item/application/solution, or is it developed in house?
- If the AI is procured, e.g. embedded in a sensor product, what items are included in the contract to future-proof the solution?
- Are contract items included to protect the organization's data reuse rights?
- Are Best Practices being followed?
- What Evaluation - Measures are documented? Are the Measures used correctly?
- What is the ML Test Score?
- What is the current inference/prediction true positive rate (TPR)?
- How accurate does the AI have to be before it can be trusted? What is the inference/prediction rate performance metric for the Program?
- What is the false-positive rate? How does the AI reduce false positives without increasing false negatives? What is the false-positive rate performance metric for the Program? Is there a Receiver Operating Characteristic (ROC) curve plotting the true positive rate (TPR) against the false positive rate (FPR)? (See the metrics sketch after this list.)
- Has the data been identified for AI (current application or for future use) initiative(s)?
- Is the data labeled, or does it require manual labeling?
- Have the key features to be used in the AI model been identified? If needed, what are the algorithms used to combine AI features? What is the approximate number of features used?
- How are the dataset(s) used for AI training, testing, and validation managed? Are logs kept on which data is used for different executions/training runs so that the information used is traceable (see the traceability sketch after this list)? How is access to the information guaranteed?
- Are the dataset(s) for AI published (repo, marketplace) for reuse, if so where?
- What are the AI architecture specifics, e.g. Ensemble Learning methods used, graph network, or Distributed learning?
- What AI model type(s) are used? Regression, K-Nearest Neighbors (KNN), Graph Neural Networks (Graph Nets), Reinforcement Learning (RL), Association Rule Learning, etc.
- Is Transfer Learning used? If so, which AI models are used? What mission specific dataset(s) are used to tune the AI model?
- Are the AI models published (repo, marketplace) for reuse, if so where?
- Is the AI model reused from a repository (repo, marketplace)? If so, which one? How are you notified of updates? How often is the repository checked for updates?
- Are AI service(s) used for inference/prediction?
- What AI languages, Libraries & Frameworks, and scripting are implemented? Python, JavaScript, PyTorch, etc.
- What optimizers are used? Is augmented machine learning (AugML) or automated machine learning (AutoML) used?
- When the AI model is updated, how is it determined that performance has actually improved?
- Against what benchmark standard(s) is the AI model compared/scored? e.g. Global Vectors for Word Representation (GloVe)
- How often is the deployed AI process monitored or measures re-evaluated?
- How is bias accounted for in the AI process? How are the dataset(s) used assured to represent the problem space (see the representativeness sketch after this list)? What is the process for removing features/data believed to be irrelevant? What assurance is provided that the model (algorithm) is not biased?
- Is the model (implemented or to be implemented) explainable? Interpretable? How so?
- Has role/job displacement due to automation and/or AI implementation been addressed?
- Are User and Entity Behavior Analytics (UEBA) and AI used to help create a baseline for trusted workload access?
- Is AI being used for Cybersecurity?
- Is AI used to protect the Program against targeted attacks, often referred to as advanced targeted attacks (ATAs) or advanced persistent threats (APTs)?
- If the Program is implementing AI, is the Program implementing an AIOps / MLOps pipeline/toolchain?
- What tools are used for AIOps / MLOps? Please identify both on-premises tools and online services.
- Are the AI languages, libraries, scripting, and AIOps / MLOps applications registered in the organization?
- Does the Program depict the AIOps / MLOps pipeline/toolchain applications in their tech stack?
- Has the Program identified where AI is used in the SecDevOps architecture? e.g. software testing
- Is data management reflected in the AIOps / MLOps pipeline/toolchain processes/architecture?
- Are the end-to-end visibility and bottleneck risks for AIOps / MLOps pipeline/toolchain reflected in the risk register with mitigation strategy for each risk?
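For the metrics questions above (TPR, false-positive rate, ROC curve), here is a minimal sketch, assuming a binary classifier that produces probability scores and a labeled held-out test set, of how those quantities can be computed with scikit-learn; the labels and scores are illustrative only:

<syntaxhighlight lang="python">
import numpy as np
from sklearn.metrics import confusion_matrix, roc_curve, roc_auc_score

# Hypothetical ground-truth labels and model scores from a held-out test set.
y_true = np.array([0, 0, 1, 1, 1, 0, 1, 0])
y_score = np.array([0.10, 0.40, 0.35, 0.80, 0.70, 0.20, 0.90, 0.60])

# Confusion matrix at a fixed 0.5 decision threshold.
y_pred = (y_score >= 0.5).astype(int)
tn, fp, fn, tp = confusion_matrix(y_true, y_pred).ravel()
tpr = tp / (tp + fn)  # true positive rate (sensitivity / recall)
fpr = fp / (fp + tn)  # false positive rate
print(f"TPR at threshold 0.5: {tpr:.2f}, FPR: {fpr:.2f}")

# ROC curve: TPR vs. FPR across all thresholds, summarized by the area under the curve.
fpr_curve, tpr_curve, thresholds = roc_curve(y_true, y_score)
print(f"AUC: {roc_auc_score(y_true, y_score):.2f}")
</syntaxhighlight>

Reporting the full ROC curve (or its AUC) alongside a single-threshold TPR/FPR pair makes the trade-off between false positives and false negatives explicit for the Program.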
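For the dataset-management and traceability question above, a minimal sketch of logging which dataset versions fed each training run; the file paths, run identifiers, and JSONL log format are illustrative assumptions, not a prescribed tool:

<syntaxhighlight lang="python">
import hashlib
import json
import datetime

def file_sha256(path: str) -> str:
    """Content hash so a dataset file can be identified unambiguously later."""
    digest = hashlib.sha256()
    with open(path, "rb") as f:
        for chunk in iter(lambda: f.read(1 << 20), b""):
            digest.update(chunk)
    return digest.hexdigest()

def log_training_run(run_id: str, dataset_paths: dict, log_file: str = "training_runs.jsonl") -> None:
    """Append one JSON record per run: timestamp, run id, and a hash of every split used."""
    record = {
        "run_id": run_id,
        "timestamp": datetime.datetime.utcnow().isoformat(),
        "datasets": {
            split: {"path": path, "sha256": file_sha256(path)}
            for split, path in dataset_paths.items()
        },
    }
    with open(log_file, "a") as f:
        f.write(json.dumps(record) + "\n")

# Example usage with hypothetical paths:
# log_training_run("run-042", {"train": "data/train.csv",
#                              "validation": "data/val.csv",
#                              "test": "data/test.csv"})
</syntaxhighlight>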
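For the bias and problem-space-representation question above, a minimal sketch that compares category distributions in the training data against a reference sample (e.g. production data); the column names and the 5-percentage-point tolerance are hypothetical:

<syntaxhighlight lang="python">
import pandas as pd

def distribution_gap(train: pd.DataFrame, reference: pd.DataFrame, column: str) -> pd.Series:
    """Absolute difference in relative frequency per category, in percentage points."""
    p_train = train[column].value_counts(normalize=True)
    p_ref = reference[column].value_counts(normalize=True)
    return (p_train.sub(p_ref, fill_value=0).abs() * 100).sort_values(ascending=False)

def check_representativeness(train, reference, columns, tolerance_pp=5.0):
    """Warn when any category's share differs from the reference by more than the tolerance."""
    for column in columns:
        gap = distribution_gap(train, reference, column)
        flagged = gap[gap > tolerance_pp]
        if not flagged.empty:
            print(f"WARNING: '{column}' differs from the reference by more than "
                  f"{tolerance_pp} percentage points:\n{flagged}")

# Example usage with hypothetical data and column names:
# check_representativeness(train_df, production_sample_df, columns=["label", "region"])
</syntaxhighlight>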
ML Test Score
- Machine Learning: The High Interest Credit Card of Technical Debt | D. Sculley, G. Holt, D. Golovin, E. Davydov, T. Phillips, D. Ebner, V. Chaudhary, and M. Young - Google Research
- Hidden Technical Debt in Machine Learning Systems | D. Sculley, G. Holt, D. Golovin, E. Davydov, T. Phillips, D. Ebner, V. Chaudhary, M. Young, J. Crespo, and D. Dennison - Google Research
Creating reliable, production-level machine learning systems brings on a host of concerns not found in small toy examples or even large offline research experiments. Testing and monitoring are key considerations for ensuring the production-readiness of an ML system, and for reducing the technical debt of ML systems. But it can be difficult to formulate specific tests, given that the actual prediction behavior of any given model is difficult to specify a priori. In this paper, we present 28 specific tests and monitoring needs, drawn from experience with a wide range of production ML systems, to help quantify these issues and present an easy-to-follow road map to improve production readiness and pay down ML technical debt.
The ML Test Score: A Rubric for ML Production Readiness and Technical Debt Reduction | E. Breck, S. Cai, E. Nielsen, M. Salib, and D. Sculley - Google Research
Full Stack Deep Learning
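As an illustration of what one of the rubric's automated checks might look like, here is a minimal sketch of a pre-promotion test that a candidate model does not regress against the currently deployed baseline; the accuracy metric, the 0.02 tolerance, and the predict interface are assumptions, not part of the rubric itself:

<syntaxhighlight lang="python">
from sklearn.metrics import accuracy_score

def test_candidate_does_not_regress(candidate, baseline, X_eval, y_eval, tolerance=0.02):
    """Block promotion if the candidate model is meaningfully worse than the deployed baseline."""
    candidate_acc = accuracy_score(y_eval, candidate.predict(X_eval))
    baseline_acc = accuracy_score(y_eval, baseline.predict(X_eval))
    assert candidate_acc >= baseline_acc - tolerance, (
        f"Candidate accuracy {candidate_acc:.3f} regressed against baseline {baseline_acc:.3f}"
    )
</syntaxhighlight>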
Procuring
Best Practices
Model Deployment Scoring
Shawn Scully: Production and Beyond: Deploying and Managing Machine Learning Models | PyData NYC 2015
Machine learning has become the key component in building intelligence-infused applications. However, as companies increase the number of such deployments, the number of machine learning models that need to be created, maintained, monitored, tracked, and improved grows at a tremendous pace. This growth has led to a huge (and well-documented) accumulation of technical debt. Developing a machine learning application is an iterative process that involves building multiple models over a dataset. The dataset itself evolves over time as new features and new data points are collected. Furthermore, once deployed, the models require updates over time. Changes in models and datasets become difficult to track, and one can quickly lose track of which version of the model used which data and why it was subsequently replaced. In this talk, we outline some of the key challenges in large-scale deployments of many interacting machine learning models. We then describe a methodology for management, monitoring, and optimization of such models in production, which helps mitigate the technical debt. In particular, we demonstrate how to: track models and versions, and visualize their quality over time; track the provenance of models and datasets, and quantify how changes in data impact the models being served; and optimize model ensembles in real time, based on changing data, and provide alerts when such ensembles no longer provide the desired accuracy.
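A minimal sketch of the kind of bookkeeping the talk argues for: recording each model version together with the dataset it was trained on and its evaluation metrics, so quality can be charted over time and provenance questions answered. The registry file name and field names are hypothetical, not taken from the talk:

<syntaxhighlight lang="python">
import json
import datetime

REGISTRY_FILE = "model_registry.jsonl"  # hypothetical location

def register_model(name: str, version: str, dataset_id: str, metrics: dict) -> None:
    """Record one model version with its training-data identifier and evaluation metrics."""
    entry = {
        "model": name,
        "version": version,
        "dataset_id": dataset_id,   # e.g. a dataset hash or snapshot tag
        "metrics": metrics,         # e.g. {"auc": 0.91, "tpr_at_1pct_fpr": 0.64}
        "registered_at": datetime.datetime.utcnow().isoformat(),
    }
    with open(REGISTRY_FILE, "a") as f:
        f.write(json.dumps(entry) + "\n")

def quality_over_time(name: str, metric: str):
    """Return (version, metric value) pairs for one model, in registration order."""
    with open(REGISTRY_FILE) as f:
        rows = [json.loads(line) for line in f]
    return [(r["version"], r["metrics"].get(metric)) for r in rows if r["model"] == name]
</syntaxhighlight>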
Vision
- Creatives

Who Makes AI Projects Successful (https://www.youtube.com/watch?v=uBxM0RTHd28)
Business leaders often have high expectations of AI/ML projects, and are sorely disappointed when things don't work out. AI implementations are more than just solving the technology problem. There are many other aspects to consider, and you'll need someone who has strong knowledge and background in business, technology (especially AI/ML), and data to guide the business on which projects to take on, strategic direction, updates, and many other aspects. In this video, I call out the need for such a role because the underlying paradigm of software development is shifting. Here's what I can do to help you. I speak on the topics of architecture and AI, help you integrate AI into your organization, educate your team on what AI can or cannot do, and make things simple enough that you can take action from your new knowledge. I work with your organization to understand the nuances and challenges that you face, and together we can understand, frame, analyze, and address challenges in a systematic way so that you see improvement in your overall business, stay aligned with your strategy, and, most importantly, you and your organization can incrementally change to transform and thrive in the future. If any of this sounds like something you might need, please reach out to me at dr.raj.ramesh@topsigma.com, and we'll get back in touch within a day. Thanks for watching my videos and for subscribing. www.topsigma.com www.linkedin.com/in/rajramesh

How Should We Evaluate Machine Learning for AI?: Percy Liang (https://www.youtube.com/watch?v=7CcSm0PAr-Y)
Machine learning has undoubtedly been hugely successful in driving progress in AI, but it implicitly brings with it the train-test evaluation paradigm. This standard evaluation only encourages behavior that is good on average; it does not ensure robustness, as demonstrated by adversarial examples, and it breaks down for tasks such as dialogue that are interactive or do not have a correct answer. In this talk, I will describe alternative evaluation paradigms with a focus on natural language understanding tasks, and discuss ramifications for guiding progress in AI in meaningful directions. Percy Liang is an Assistant Professor of Computer Science at Stanford University (B.S. from MIT, 2004; Ph.D. from UC Berkeley, 2011). His research spans machine learning and natural language processing, with the goal of developing trustworthy agents that can communicate effectively with people and improve over time through interaction. Specific topics include question answering, dialogue, program induction, interactive learning, and reliable machine learning. His awards include the IJCAI Computers and Thought Award (2016), an NSF CAREER Award (2016), a Sloan Research Fellowship (2015), and a Microsoft Research Faculty Fellowship (2014).