ML Test Score
- Evaluation
- Cybersecurity: Evaluating & Selling
- Strategy & Tactics
- Checklists
- AI Governance
- Automated Scoring
- Risk, Compliance and Regulation
- AIOps / MLOps
- Libraries & Frameworks
- Guidance on the AI auditing framework | Information Commissioner's Office (ICO)
- Technology Readiness Assessments (TRA) Guide | US GAO ...used to evaluate the maturity of technologies and whether they are developed enough to be incorporated into a system without too much risk.
- Cybersecurity Reference and Resource Guide | DOD
- Five ways to evaluate AI systems | Felix Wetzel - Recruiting Daily
- Cyber Security Evaluation Tool (CSET®) ...provides a systematic, disciplined, and repeatable approach for evaluating an organization’s security posture.
- 3 Common Technical Debts in Machine Learning and How to Avoid Them | Derek Chia - Towards Data Science
- New code completeness checklist and reproducibility updates | Facebook AI
- Why you should care about debugging machine learning models | Patrick Hall and Andrew Burt - O'Reilly
Many products today leverage artificial intelligence for a wide range of industries, from healthcare to marketing. However, most business leaders who need to make strategic and procurement decisions about these technologies have no formal AI background or academic training in data science. The purpose of this article is to give business people with no AI expertise a general guideline on how to assess an AI-related product to help decide whether it is potentially relevant to their business. How to Assess an Artificial Intelligence Product or Solution (Even if You’re Not an AI Expert) | Daniel Faggella - Emerj
Nature of risks inherent to AI applications: We believe that the challenge in governing AI is less about dealing with completely new types of risk and more about existing risks either being harder to identify in an effective and timely manner, given the complexity and speed of AI solutions, or manifesting themselves in unfamiliar ways. As such, firms do not require completely new processes for dealing with AI, but they will need to enhance existing ones to take into account AI and fill the necessary gaps. The likely impact on the level of resources required, as well as on roles and responsibilities, will also need to be addressed. AI and risk management: Innovating with confidence | Deloitte
- What challenge does the AI investment solve?
- Is the intent of AI to increase performance (detection), reduce costs (predictive maintenance, reduced inventory), decrease response time, or achieve other outcome(s)?
- How does the AI investment meet the challenge?
- What analytics are being implemented? Descriptive (what happened?), Diagnostic (why did it happen?), Predictive/Preventive (what could happen?), Prescriptive (what should happen?), Cognitive (what steps should be taken?)
- Is AI being used for Cybersecurity? Is AI used to protect the AI investment against targeted attacks, often referred to as advanced targeted attacks (ATAs) or advanced persistent threats (APTs)?
- Is the organization using the AI investment to gain better capability in the future?
- Is the right Leadership in place?
- Is Leadership's AI strategy documented and articulated well?
- Does the AI investment strategy align with the organization's overall strategy and values?
- Is the AI investment properly resourced: budgeted, with trained staff and key positions filled?
- Is responsibility clearly defined and communicated for AI research, performing data science, applied machine intelligence engineering, quality assurance, software development, implementing foundational capabilities, user experience, change management, configuration management, security, backup/contingency, domain expertise, and project management?
- Is the organization positioned, or positioning itself, to scale its current state with AI?
- Does the AI reside in a procured item/application/solution, or is it developed in house?
- If the AI is procured, e.g. embedded in a sensor product, what items are included in the contract to future-proof the solution?
- Are contract items in place to protect the organization's data reuse rights?
- Are Best Practices being followed? Is the team trained in the Best Practices?
- What is the Return on Investment (ROI)? Is the AI investment on track with original ROI target?
- What is the clear and realistic way of measuring the success of the AI investment?
- What are the significant measures that indicate the AI investment is achieving success?
- What Evaluation - Measures are documented? Are the Measures being used correctly?
- How would you be able to tell if the AI investment was working properly?
- How perfect does AI have to be to trust it? What is the inference/prediction rate performance metric for the AI investment?
- What is the current inference/prediction True Positive Rate (TPR)?
- What is the False Positive Rate (FPR)? How does the AI reduce false positives without increasing false negatives?
- Is there a Receiver Operating Characteristic (ROC) curve plotting the True Positive Rate (TPR) against the False Positive Rate (FPR)? (See the sketch after this list.)
- When the AI model is updated, how is it determined that the performance was indeed increased for the better?
- Are response plans, procedures and training in place to address AI attack or failure incidents? How are AI investment’s models audited for security vulnerabilities?
- What is the ML Test Score?
- Does Data Governance treat data as a first-class asset?
- Is Master Data Management (MDM) in place?
- Is there a data management plan? Does data planning address metadata for dataflows and data transitions? Data quality?
- Has the data been identified for current AI investment? For future use AI investment(s)?
- Are the internal data resources available and accessible? For external data resources, are contracts in place to make the data available and accessible?
- Are permissions in place to use the data, with privacy constraints considered and mitigated?
- Is the data labeled, or does it require manual labeling?
- What is the quality of the data: is it skewed, does it have gaps, is it clean?
- Is a sufficient amount of data available?
- Have the key features to be used in the AI model been identified? If needed, what are the algorithms used to combine AI features? What is the approximate number of features used?
- How are the dataset(s) used for AI training, testing and Validation managed? Are logs kept on which data is used for different executions/training so that the information used is traceable?
- How is the access to the information guaranteed? Are the dataset(s) for AI published (repo, marketplace) for reuse, if so where?
- What AI Governance is in place?
- What are the AI architecture specifics, e.g. Ensemble Learning methods used, graph network, or Distributed learning?
- What AI model type(s) are used? Regression, K-Nearest Neighbors (KNN), Graph Neural Networks (Graph Nets), Reinforcement Learning (RL), Association Rule Learning, etc.
- Is Transfer Learning used? If so, which AI models are used? What mission specific dataset(s) are used to tune the AI model?
- Are the AI models published (repo, marketplace) for reuse, if so where?
- Is the AI model reused from a repository (repo, marketplace)? If so, which one? How are you notified of updates? How often is the repository checked for updates?
- Are AI service(s) used for inference/prediction?
- What AI languages, Libraries & Frameworks, and scripting are implemented? Python, JavaScript, PyTorch, etc.
- What optimizers are used? Is augmented machine learning (AugML) or automated machine learning (AutoML) used?
- Against what benchmark standard(s) is the AI model compared/scored? e.g. Global Vectors for Word Representation (GloVe)
- How often is the deployed AI process monitored or measures re-evaluated?
- How is bias accounted for in the AI process? How are the dataset(s) used assured to represent the problem space? What is the process for removing features/data believed not to be relevant? What assurance is provided that the model (algorithm) is not biased?
- Is the model (implemented or to be implemented) explainable? Interpretable? How so?
- Has role/job displacement due to automation and/or AI implementation been addressed?
- Are User and Entity Behavior Analytics (UEBA) and AI used to help create a baseline for trusted workload access?
- What foundational capabilities are defined or in place for the AI investment? Infrastructure platform? Cloud resources?
- Is the AI investment implementing an AIOps / MLOps pipeline/toolchain?
- What tools are used for AIOps / MLOps? Please identify both on-premises tools and online services.
- Are the AI languages, libraries, scripting, and AIOps / MLOps applications registered in the organization?
- Does the AI investment depict the AIOps / MLOps pipeline/toolchain applications in its tech stack?
- Is the AI investment identified in the SecDevOps architecture?
- Is data management reflected in the AIOps / MLOps pipeline/toolchain processes/architecture?
- Are the end-to-end visibility and bottleneck risks for the AIOps / MLOps pipeline/toolchain reflected in the risk register, with a mitigation strategy for each risk?
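For the inference/prediction measures asked about in the list (TPR, FPR, ROC), the sketch below shows one way to compute them with scikit-learn. It is a minimal illustration, not part of any particular AI investment: the y_true and y_score arrays are hypothetical placeholders for a model's held-out labels and prediction scores.

```python
# Minimal sketch: computing TPR/FPR pairs and an ROC curve for a binary
# classifier with scikit-learn. y_true and y_score are hypothetical
# placeholders; substitute the AI investment's own held-out labels and scores.
import numpy as np
from sklearn.metrics import roc_curve, roc_auc_score

y_true = np.array([0, 0, 1, 1, 0, 1, 0, 1])                           # ground-truth labels
y_score = np.array([0.10, 0.40, 0.35, 0.80, 0.20, 0.70, 0.55, 0.90])  # model scores

# roc_curve sweeps the decision threshold, returning FPR and TPR at each step
fpr, tpr, thresholds = roc_curve(y_true, y_score)
auc = roc_auc_score(y_true, y_score)  # area under the ROC curve

for f, t, th in zip(fpr, tpr, thresholds):
    print(f"threshold={th:.2f}  FPR={f:.2f}  TPR={t:.2f}")
print(f"AUC = {auc:.3f}")
```

Plotting tpr against fpr yields the ROC curve; an AUC near 1.0 indicates a model that separates the classes well, while 0.5 is no better than chance.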
ML Test Score
Creating reliable, production-level machine learning systems brings on a host of concerns not found in small toy examples or even large offline research experiments. Testing and monitoring are key considerations for ensuring the production-readiness of an ML system, and for reducing technical debt of ML systems. But it can be difficult to formulate specific tests, given that the actual prediction behavior of any given model is difficult to specify a priori. In this paper, we present 28 specific tests and monitoring needs, drawn from experience with a wide range of production ML systems, to help quantify these issues and present an easy-to-follow road map to improve production readiness and pay down ML technical debt. The ML Test Score: A Rubric for ML Production Readiness and Technical Debt Reduction | E. Breck, S. Cai, E. Nielsen, M. Salib, and D. Sculley - Google Research
- Full Stack Deep Learning
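The rubric's arithmetic is simple enough to sketch: per the paper, each of the 28 tests earns 0.5 points if performed manually with the results documented and distributed, and 1 point if automated and run regularly; the final ML Test Score is the minimum of the four section totals (Data, Model, Infrastructure, Monitoring). The per-test scores below are hypothetical, for illustration only.

```python
# Minimal sketch of the rubric's scoring: each of the 28 tests scores
# 0.0 (not done), 0.5 (performed manually, results documented), or
# 1.0 (automated, run regularly). The final ML Test Score is the MINIMUM
# of the four section totals. Per-test scores here are hypothetical.
sections = {
    "Data":           [1.0, 0.5, 0.5, 0.0, 1.0, 0.5, 0.5],  # 7 tests for features and data
    "Model":          [0.5, 0.5, 1.0, 0.5, 0.0, 0.5, 0.5],  # 7 tests for model development
    "Infrastructure": [1.0, 1.0, 0.5, 0.5, 0.5, 0.0, 0.5],  # 7 tests for ML infrastructure
    "Monitoring":     [0.5, 0.5, 0.5, 1.0, 0.5, 0.5, 0.0],  # 7 monitoring tests
}

section_totals = {name: sum(scores) for name, scores in sections.items()}
ml_test_score = min(section_totals.values())

for name, total in section_totals.items():
    print(f"{name:>14}: {total:.1f}")
print(f"ML Test Score = {ml_test_score:.1f}")
```

Taking the minimum rather than the sum reflects the paper's framing: a system is only as production-ready as its weakest of the four areas.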
Procuring
Best Practices
Leadership
Return on Investment (ROI)
Model Deployment Scoring
Using Historical Incident Data to Reduce Risks