Evaluation
Contents
- 1 What challenge does the AI investment solve?
- 2 How does the AI meet the challenge?
- 3 Is the right leadership in place?
- 4 Are best practices being followed?
- 5 What Laws, Regulations and Policies (LRPs) pertain?
- 6 What portion of the AI is developed inhouse and what is/will be procured?
- 7 How is AI success measured?
- 8 What AI governance is in place?
- 9 What is the data governance process?
- 10 How is the AI investment deployed?
- 11 How is production readiness determined?
- 12 How are changes identified and managed?
What challenge does the AI investment solve?
- What mission outcome(s) will the AI investment benefit, e.g. increased revenue (marketing), greater competitiveness (gained capability), increased performance (detection, automation, discovery), reduced costs (optimization, predictive maintenance, reduced inventory), time reduction, personalization (recommendations), avoided risk of non-compliance, better communication (user interface, natural-language understanding, telecommunications), broader and better integration (Internet of Things (IoT), smart cities), or other outcome(s)?
- Would you classify the AI investment as being evolutionary, revolutionary, or disruptive?
- What similar functionality exists in other solutions where lessons can be applied to the AI investment? Can the hypothesis be tested? Playing devil's advocate, could there be a flaw in the analogical reasoning?
- Have opportunistic AI aspects of the end-to-end mission process(es) been reviewed?
- Was a knowledge-based approach used for the review? Was AI used for optimizing or simulating the process?
- For each aspect how does the AI augment human users?
- Does the business case for the AI investment define clear objectives?
- Whose need(s) is the AI investment addressing?
- Is there a brochure-type version of requirements shared with stakeholders? Is dialog with stakeholders ongoing?
How does the AI meet the challenge?
- What AI is being implemented? Descriptive (what happened?), Diagnostic (why did it happen?), Predictive/Preventive (what could happen?), Prescriptive (what should happen?), Cognitive (what steps should be taken, e.g. in cybersecurity)?
- What algorithms are used or being considered? How was or will the choice be made?
- What learning techniques are planned or in use for the AI investment, e.g. Human-in-the-Loop (HITL) Learning? (A minimal HITL sketch follows this list.)
- Were there AI pilot(s) prior to the current investment?
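As a minimal sketch of the Human-in-the-Loop (HITL) learning technique mentioned above, the example below uses uncertainty sampling to route the least-confident predictions to human annotators. It assumes scikit-learn and NumPy are available; `label_by_human` is a hypothetical placeholder for the organization's actual annotation workflow.

```python
# Minimal human-in-the-loop (uncertainty sampling) sketch.
# `label_by_human` is a hypothetical stand-in for a real annotation workflow.
import numpy as np
from sklearn.linear_model import LogisticRegression

def label_by_human(examples):
    # Placeholder: in practice this routes examples to human annotators.
    return np.zeros(len(examples), dtype=int)

def hitl_round(model, X_labeled, y_labeled, X_pool, budget=10):
    """Fit on labeled data, then send the least-confident pool items to humans."""
    model.fit(X_labeled, y_labeled)
    proba = model.predict_proba(X_pool)
    uncertainty = 1.0 - proba.max(axis=1)      # low max-probability = uncertain
    ask = np.argsort(uncertainty)[-budget:]    # most uncertain examples
    new_y = label_by_human(X_pool[ask])
    X_labeled = np.vstack([X_labeled, X_pool[ask]])
    y_labeled = np.concatenate([y_labeled, new_y])
    X_pool = np.delete(X_pool, ask, axis=0)
    return model, X_labeled, y_labeled, X_pool

# Example usage with synthetic data (placeholders for mission data):
rng = np.random.default_rng(0)
X_l = rng.normal(size=(20, 4))
y_l = np.array([0, 1] * 10)
X_p = rng.normal(size=(100, 4))
model, X_l, y_l, X_p = hitl_round(LogisticRegression(), X_l, y_l, X_p)
```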
Is the right leadership in place?
- Is leadership's AI strategy documented and articulated well?
- Does the AI investment strategy align with the organization's overall strategy and values?
- Is there a time constraint? Does the schedule meet the Technology Readiness Level (TRL) of the AI investment?
- Is the AI investment properly resourced, i.e. budgeted and staffed with trained personnel in key positions?
- Is responsibility clearly defined and communicated for AI research, data science, applied machine intelligence engineering, quality assurance, software development, implementing foundational capabilities, user experience, change management, configuration management, security, backup/contingency, domain expertise, and project management?
- Of these identified responsibilities, which are outsourced? What strategy is in place to transfer AI investment knowledge to the organization?
- Is the organization positioned, or positioning itself, to scale its current state with AI?
Are best practices being followed?
- Are best practices documented/referenced?
- Is cybersecurity a component of best practices?
- Is the team trained in the best practices, e.g. AI Governance, Data Governance, AIOps / MLOps?
- What checklists are used?
- Is there a product roadmap?
What Laws, Regulations and Policies (LRPs) pertain?
- Are use cases testable and traceable to requirements, including LRPs?
- When was the last time compliance requirements and regulations were examined? What adjustments were/must be made?
What portion of the AI is developed inhouse and what is/will be procured?
- If the AI is procured/outsourced, e.g. embedded in a sensor product, what items are included in the contract to future-proof the solution?
- Are contract items included to protect the organization's data reuse rights?
- Do the acceptance criteria include a proof of capability?
- How well do a vendor's service/product(s) and/or client references compare with the AI investment objectives?
How is AI success measured?
- What are the significant measures that indicate success?
- Are the ways the mission is being measured clear, realistic, and documented? Specifically what are the AI investment's performance measures?
- Are the measures being used correctly?
- What is the Return on Investment (ROI)? Is the AI investment on track with original ROI target?
- If there is/was an Analysis of Alternatives how were these measures used? What were the findings?
- What mission metrics will be impacted with the AI investment? What drivers/measures have the most bearing? Of these performance indicators which can be used as leading indicators of the health of the AI investment?
- What are the specific decisions and activities to impact each driver/measure?
- What assumptions are being made? Of these assumptions, what constraints are anticipated?
- Are there other related AI investments? If so, is this AI investment dependent on the other investment(s)? What investments require this AI investment to be successful? If so, how? Are there mitigation plans in place?
- How would you be able to tell if the AI investment was working properly?
- Against what benchmarks is the AI model compared/scored, e.g. Global Vectors for Word Representation (GloVe)?
- How perfect does the AI have to be before it is trusted?
- What is the inference/prediction rate performance metric for the AI investment?
- What is the current inference/prediction True Positive Rate (TPR)?
- What is the False Positive Rate (FPR)? How does the AI reduce false positives without increasing false negatives?
- Is there a Receiver Operating Characteristic (ROC) curve plotting the True Positive Rate (TPR) against the False Positive Rate (FPR)? (A minimal metrics sketch follows this list.)
- Is/will A/B testing or multivariate testing be performed?
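As a minimal sketch of the TPR/FPR/ROC questions above, assuming scikit-learn and that ground-truth labels and model scores are already on hand (the values below are placeholders):

```python
# Compute TPR/FPR at one threshold, then the full ROC curve and AUC.
import numpy as np
from sklearn.metrics import confusion_matrix, roc_curve, roc_auc_score

y_true  = np.array([0, 0, 1, 1, 0, 1, 0, 1])                    # placeholder labels
y_score = np.array([0.1, 0.4, 0.35, 0.8, 0.2, 0.7, 0.6, 0.9])   # placeholder model scores

# TPR / FPR at a single decision threshold (0.5 here).
tn, fp, fn, tp = confusion_matrix(y_true, (y_score >= 0.5).astype(int)).ravel()
tpr = tp / (tp + fn)   # True Positive Rate (sensitivity/recall)
fpr = fp / (fp + tn)   # False Positive Rate
print(f"TPR={tpr:.2f}  FPR={fpr:.2f}")

# Full ROC curve (TPR vs. FPR across all thresholds) and area under it.
fpr_curve, tpr_curve, thresholds = roc_curve(y_true, y_score)
print("AUC =", roc_auc_score(y_true, y_score))
```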
What AI governance is in place?
- Does AI Governance implement a risk-based approach, e.g. greater consideration or controls for high risk use cases?
- What are the AI architecture specifics, e.g. Ensemble Learning methods used, Graph Convolutional Network (GCN), Graph Neural Networks (Graph Nets), Geometric Deep Learning, Digital Twin, Distributed Learning?
- Is wetware (the brain) or hardware involved, e.g. Internet of Things (IoT): physical sensors, mobile phones, screening devices, cameras/surveillance, medical instrumentation, robots, autonomous vehicles, drones, quantum computing, assistants/chatbots?
- What AI algorithms/model type(s) are used? Regression, K-Nearest Neighbors (KNN), Deep Neural Network (DNN), Reinforcement Learning (RL), Natural Language Processing (NLP), Association Rule Learning, etc.
- What learning technique(s) are or will be implemented? If a transfer learning process is used, which model(s) and what mission-specific dataset(s) are used to tune the AI model?
- What tools are used or will be used for model management?
- How are hyperparameters managed? What optimizers are used, e.g. automated machine learning (AutoML)?
- What components, e.g. optimizer, tuner, training, versioning, experiment tracking, publishing, performance evaluation, and storage, are integrated in the model management tool?
- Are the AI models published (repo, marketplace) for reuse, if so where?
- Is the AI model reused from a repository (repo, marketplace)? If so, which one? How are you notified of updates? How often is the repository checked for updates?
- Do requirements trace to tests?
- If using machine learning, how are the models evaluated?
- How is troubleshooting accomplished? How transparent is the development process?
- How is bias accounted for in the AI process? What assurance is provided that the model (algorithm) is not biased?
- Is one of the mission's goals to understand the AI in terms of its inputs and how their relationships impact the outcome (prediction)? Is the model (implemented or to be implemented) explainable? Interpretable? How so? Are stakeholders involved? How? (A minimal explainability sketch follows this list.)
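One common way to probe explainability is permutation importance: shuffle one feature at a time and measure how much the model's score drops. A minimal sketch, assuming scikit-learn; the model, feature names, and data are illustrative only:

```python
# Permutation importance as a simple, model-agnostic explainability check.
import numpy as np
from sklearn.ensemble import RandomForestClassifier
from sklearn.inspection import permutation_importance
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(1)
X = rng.normal(size=(300, 4))
y = (X[:, 0] + 0.5 * X[:, 1] > 0).astype(int)   # only features 0 and 1 matter
feature_names = ["f0", "f1", "f2", "f3"]         # hypothetical feature names

X_tr, X_te, y_tr, y_te = train_test_split(X, y, random_state=0)
model = RandomForestClassifier(random_state=0).fit(X_tr, y_tr)

# Importance = drop in score when a feature's values are shuffled.
result = permutation_importance(model, X_te, y_te, n_repeats=10, random_state=0)
for name, mean_imp in sorted(zip(feature_names, result.importances_mean),
                             key=lambda t: -t[1]):
    print(f"{name}: {mean_imp:.3f}")
```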
What is the data governance process?
- Is there a data management plan or planning process? Does data planning address metadata for dataflows and data transitions?
- Has the data been identified for the current AI investment? For future AI investment(s)?
- What are the possible constraints or challenges in accessing or incorporating the identified data?
- Are the internal data resources available and accessible?
- For external data resources, have they been sourced with contracts in place to make the data available and accessible?
- Are permissions in place to use the data, with privacy and security restrictions considered and mitigated?
- What is the expected size of the data to be used for training? What is the ratio of observations (rows) to features (columns)?
- What is the quality of the data, e.g. skewness, completeness, cleanliness? If there is a data management plan, does it include a section on data quality?
- How are the dataset(s) used assured to represent the problem space?
- Is a sufficient amount of data available? If the model is temporal, does the data have a rich history? Does the historical data cover periodic and other critical events?
- Does the data have a refresh schedule? Is the data punctual, i.e. does it arrive on time or is it ready to be pulled?
- For each dataset, has the information been determined to be structured, semi-structured, or unstructured?
- Will any data labeling be required? Is the data augmented? Is auto-tagging used? What data augmentation tools are/will be used?
- Have the key features/data attributes to be used in the AI model been identified?
- What is the quality of the data labeling?
- What data/feature exploration/engineering processes and tools are in place or being considered?
- If needed, what are the algorithms used to combine AI features? What is the approximate number of features used?
- What is the process for removing features/data believed not to be relevant?
- Is Master Data Management (MDM) in place? What tools are available or being considered?
- Is data lineage managed?
- What data cataloging capabilities exist today? What future capabilities are planned?
- How are data versions controlled?
- How are the dataset(s) used for AI training, testing and validation managed?
- Are logs kept on which data is used for different executions/training so that the information used is traceable?
- How is access to the information guaranteed? Are the dataset(s) for AI published (repo, marketplace) for reuse, and if so, where?
- What data quality checks are in place? What tools are in place or being considered? (A minimal data-quality sketch follows this list.)
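A minimal sketch of basic data-quality checks and training-split traceability, assuming pandas; the columns, values, and hashing choice below are placeholders for whatever the data management plan actually specifies:

```python
# Basic data-quality checks plus a content hash of the exact training split used.
import hashlib
import pandas as pd

df = pd.DataFrame({                      # stand-in for e.g. pd.read_csv(...)
    "sensor_id": [1, 1, 2, 2, 3],
    "reading":   [0.4, 0.4, None, 1.2, 0.9],
    "label":     [0, 0, 1, 1, 0],
})

# Simple quality checks: completeness, duplicates, and label balance/skew.
print("missing per column:\n", df.isna().sum())
print("duplicate rows:", df.duplicated().sum())
print("label balance:\n", df["label"].value_counts(normalize=True))

# Traceability: log a content hash of the exact training split that was used.
train = df.sample(frac=0.8, random_state=42)
split_hash = hashlib.sha256(
    pd.util.hash_pandas_object(train, index=True).values.tobytes()
).hexdigest()
print("training-split hash:", split_hash[:12])
```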
How is the AI investment deployed?
- What foundational capabilities are defined or in place for the AI investment, e.g. infrastructure platform, cloud resources? What is the development & implementation strategy?
- What languages & scripting are/will be used? e.g. Python, JavaScript, PyTorch
- What Libraries & Frameworks are used?
- Are notebooks used? If so, is Jupyter supported?
- What visualizations are used for development? For AI investment users?
- Will the AI investment leverage Machine Learning as a Service (MLaaS)? Or be offered as an MLaaS?
- Is the AI investment implementing an AIOps / MLOps pipeline/toolchain? (A minimal sketch of one pipeline step follows this list.)
- What tools are used for AIOps / MLOps? Please identify both on-premises and online services.
- Are the AI languages, libraries, scripting, and AIOps / MLOps applications registered in the organization?
- Are the processes and decisions architecture driven?
- Does the AI investment depict the AIOps / MLOps pipeline/toolchain applications in its architecture, e.g. tech stack?
- Does the SecDevOps depict the AI investment in its architecture?
- Is data management reflected in the AIOps / MLOps pipeline/toolchain processes/architecture?
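A minimal sketch of a single AIOps / MLOps pipeline step (train, evaluate against a quality gate, package the artifact), assuming scikit-learn and joblib; the quality threshold and artifact name are assumptions, not prescribed values:

```python
# One pipeline step: train -> evaluate -> gate -> package -> reload sanity check.
import joblib
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(2)
X = rng.normal(size=(200, 3))               # placeholder training data
y = (X[:, 0] > 0).astype(int)
X_tr, X_te, y_tr, y_te = train_test_split(X, y, random_state=0)

model = LogisticRegression().fit(X_tr, y_tr)
accuracy = model.score(X_te, y_te)

# Gate the artifact on a minimum quality bar before publishing it.
if accuracy >= 0.8:                          # threshold is an assumption
    joblib.dump(model, "model-v1.joblib")    # versioned artifact name is a placeholder
    reloaded = joblib.load("model-v1.joblib")
    print("holdout accuracy:", accuracy,
          "sample prediction:", reloaded.predict(X_te[:1]))
```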
How is production readiness determined?
- Does the team use ML Test Score for production readiness?
- What are the minimum scores for Data, Model, ML Infrastructure, and Monitoring tests?
- What score qualifies to pass into production? What is the rationale for passing with less than an exceptional score (>5)? (A minimal tally sketch follows this list.)
- What were the lessons learned? Were adjustments made to move to a higher score? What were the adjustments?
- Who makes the determination when the AI investment is deployed/refreshed?
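A minimal sketch of tallying an ML Test Score as the rubric is commonly described (0.5 points per manually run test, 1.0 per automated test, with the final score taken as the minimum of the four section scores); the test names below are placeholders, not the full rubric:

```python
# Tally section scores and take the minimum as the overall ML Test Score.
SECTION_TESTS = {
    "Data": [("feature expectations captured", "automated"),
             ("data pipeline has privacy controls", "manual")],
    "Model": [("baseline model comparison", "automated")],
    "ML Infrastructure": [("training is reproducible", "manual"),
                          ("models can be rolled back", "automated")],
    "Monitoring": [("serving data matches training data", "automated")],
}

POINTS = {"manual": 0.5, "automated": 1.0}

section_scores = {
    section: sum(POINTS[kind] for _, kind in tests)
    for section, tests in SECTION_TESTS.items()
}
final_score = min(section_scores.values())
print(section_scores, "-> ML Test Score:", final_score)
```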
How are changes identified and managed?
- What capabilities are in place to identify, track, and notify changes?
- How often is the deployed AI process monitored or measures re-evaluated?
- Does the deployed AI investment collect learning data? If so, how frequently are the algorithms updated?
- What aspects of the AI investment are being monitored, e.g. performance, models, system, data?
- Are the end-to-end visibility and bottleneck risks for AIOps / MLOps pipeline/toolchain reflected in the risk register with mitigation strategy for each risk?
- When the AI model is updated, how is it determined that performance actually improved? (A minimal champion/challenger sketch follows this list.)
- What capabilities are in place to perceive and address operational environment changes?
- Are response plans, procedures and training in place to address AI attack or failure incidents? How are AI investment’s models audited for security vulnerabilities?
- How is the team notified of changes?
- Has role/job displacement due to automation and/or AI implementation been addressed?
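A minimal champion/challenger sketch for the model-update question above, assuming scikit-learn and a fixed holdout set; the models, metric, and promotion margin are illustrative assumptions:

```python
# Compare the deployed model (champion) with the updated one (challenger)
# on the same holdout set before promoting the update.
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.ensemble import GradientBoostingClassifier
from sklearn.metrics import roc_auc_score
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(3)
X = rng.normal(size=(400, 5))                     # placeholder data
y = (X[:, 0] + X[:, 1] ** 2 > 1).astype(int)
X_tr, X_hold, y_tr, y_hold = train_test_split(X, y, random_state=0)

champion   = LogisticRegression().fit(X_tr, y_tr)           # stand-in for the deployed model
challenger = GradientBoostingClassifier().fit(X_tr, y_tr)   # stand-in for the updated model

champ_auc = roc_auc_score(y_hold, champion.predict_proba(X_hold)[:, 1])
chall_auc = roc_auc_score(y_hold, challenger.predict_proba(X_hold)[:, 1])

# Promote only on a clear, pre-agreed margin (margin value is an assumption).
promote = chall_auc >= champ_auc + 0.01
print(f"champion AUC={champ_auc:.3f}  challenger AUC={chall_auc:.3f}  promote={promote}")
```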
- Evaluation - Measures
- AI Governance
- Visualization
- Hyperparameters
- Train, Validate, and Test
- Automated Machine Learning (AML) - AutoML
- Explainable / Interpretable AI
- AI Verification and Validation
- Model Monitoring
- Leadership
- Best Practices
- Return on Investment (ROI)
- Procuring
- ML Test Score
- Cybersecurity: Evaluating & Selling
- Strategy & Tactics
- Checklists
- Automated Scoring
- Risk, Compliance and Regulation
- AIOps / MLOps
- Libraries & Frameworks
- Screening; Passenger, Luggage, & Cargo
- Guidance on the AI auditing framework | Information Commissioner's Office (ICO)
- Technology Readiness Assessments (TRA) Guide | US GAO ...used to evaluate the maturity of technologies and whether they are developed enough to be incorporated into a system without too much risk.
- Cybersecurity Reference and Resource Guide | DOD
- Five ways to evaluate AI systems | Felix Wetzel - Recruiting Daily
- Cyber Security Evaluation Tool (CSET®) ...provides a systematic, disciplined, and repeatable approach for evaluating an organization’s security posture.
- 3 Common Technical Debts in Machine Learning and How to Avoid Them | Derek Chia - Towards Data Science
- Why you should care about debugging machine learning models | Patrick Hall and Andrew Burt - O'Reilly
- How to Assess an Artificial Intelligence Product or Solution (Even if You’re Not an AI Expert) | Daniel Faggella - Emerj
Nature of risks inherent to AI applications: We believe that the challenge in governing AI is less about dealing with completely new types of risk and more about existing risks either being harder to identify in an effective and timely manner, given the complexity and speed of AI solutions, or manifesting themselves in unfamiliar ways. As such, firms do not require completely new processes for dealing with AI, but they will need to enhance existing ones to take into account AI and fill the necessary gaps. The likely impact on the level of resources required, as well as on roles and responsibilities, will also need to be addressed. AI and risk management: Innovating with confidence | Deloitte