Evaluation
Revision as of 10:03, 4 June 2022
Contents
- 1 What challenge does the AI investment solve?
- 2 How does the AI meet the challenge?
- 3 Who is providing leadership?
- 4 Are best practices being followed?
- 5 What Laws, Regulations and Policies (LRPs) pertain, e.g. GDPR?
- 6 What portion of the AI is developed inhouse and what is/will be procured?
- 7 How is AI success measured?
- 8 What AI governance is in place?
- 9 What is the algorithm administration strategy?
- 10 How are changes identified and managed?
What challenge does the AI investment solve?
- Has the problem been clearly defined?
- What mission outcome(s) will be benefited by the AI investment, e.g. to increase revenue (marketing), to be more competitive (gain capability), to increase performance (detection, automation, discovery), reduce costs (optimization, predictive maintenance, reduce inventory), time reduction, provide personalization (recommendations), avoid risk of non-compliance, better communication (user interface, natural-language understanding, telecommunications), broader and better integration (Internet of Things (IoT), smart cities), or other outcome(s)?
- Would you classify the AI investment as being evolutionary, revolutionary, or disruptive?
- Was market research performed, what were the results? What similar functionality exists in other solutions where lessons can be applied to the AI investment? Can the hypothesis be tested? Playing devil's advocate, could there be a flaw in the analogical reasoning?
- Have opportunistic AI aspects of the end-to-end mission process(es) been reviewed?
- Was a knowledge-based approach used for the review? Was AI used for optimizing or simulating the process?
- For each aspect how does the AI augment human users?
- Does the business case for the AI investment define clear objectives?
- Whose need(s) is the AI investment addressing?
- Is there a brochure-type version of requirements shared with stakeholders? Is dialog with stakeholders ongoing?
How does the AI meet the challenge?
- What AI is being implemented? Descriptive (what happened?), Diagnostic (why did it happen?), Predictive/Preventive (what could happen?), Prescriptive (what should happen?), Cognitive (what steps should be taken, e.g. in Cybersecurity)?
- What algorithms are used or are being considered? How was/will the choice selected?
- What learning techniques have/are planned for the AI investment, e.g. Human-in-the-Loop (HITL) Learning?
- How was feasibility determined? Were there AI pilot(s) prior to the current investment?
Who is providing leadership?
- Is leadership's AI strategy documented and articulated well?
- Does the AI investment strategy align with the organization's overall strategy, culture, and values? Does the organization appreciate experimental processes?
- Is there a time constraint? Does the schedule meet the Technology Readiness Level (TRL) of the AI investment?
- Is the AI investment properly resourced: budgeted, with trained staff and key positions filled?
- Is responsibility clearly defined and communicated for AI research, data science, applied machine intelligence engineering, quality assurance, software development, implementing foundational capabilities, user experience, change management, configuration management, security, backup/contingency, domain expertise, and project management?
- Of these identified responsibilities, which are outsourced? What strategy is in place to convey the AI investment knowledge back to the organization?
- Is the organization positioned or positioning to scale its current state with AI?
Are best practices being followed?
- Are best practices documented/referenced?
- Is cybersecurity a component of best practices?
- Is the team trained in the best practices, e.g. AI Governance, Data Governance, AIOps?
- What checklists are used?
- Is there a product roadmap?
What Laws, Regulations and Policies (LRPs) pertain, e.g. GDPR?
- Are use cases testable and traceable to requirements, including LRPs?
- When was the last time compliance requirements and regulations were examined? What adjustments were/must be made?
- Does the AI investment require testing by external assessors to ensure compliance and/or auditing requirements?
What portion of the AI is developed inhouse and what is/will be procured?
- If the AI is procured/outsourced, e.g. embedded in sensor product, what items are included in the contract to future proof the solution?
- Are contract items included to protect the organization's data reuse rights?
- Do the acceptance criteria include a proof of capability?
- How well do a vendor's service/product(s) and/or client references compare with the AI investment objectives?
- How is/was the effort estimated? If procured AI, what factors were used to approximate the needed integration resources?
How is AI success measured?
- What are the significant measures that indicate success? Is the tradeoff rationale documented, e.g. accuracy vs. speed?
- Are the ways the mission is being measured clear, realistic, and documented? Specifically what are the AI investment's performance measures?
- What is the Return on Investment (ROI)? Is the AI investment on track with original ROI target?
- If there is/was an Analysis of Alternatives how were these measures used? What were the findings?
- What mission metrics will be impacted with the AI investment? What drivers/measures have the most bearing? Of these performance indicators which can be used as leading indicators of the health of the AI investment?
- What are the specific decisions and activities to impact each driver/measure?
- What assumptions are being made? Of these assumptions, what constraints are anticipated?
- Where does the AI investment fit in the portfolio? Are there possible synergies with other aligned efforts in the portfolio? Are there other related AI investments? If so, is this AI investment dependent on the other investment(s)? What investments require this AI investment to be successful? If so, how? Are there mitigation plans in place?
- How would you be able to tell if the AI investment was working properly?
- Have the baseline(s) for model performance been established? Against what benchmarks is the AI model compared/scored, e.g. Global Vectors for Word Representation (GloVe)?
- How perfect does AI have to be to trust it?
- What is the inference/prediction rate performance metric for the AI investment?
- What is the current inference/prediction True Positive Rate (TPR)?
- What is the False Positive Rate (FPR)? How does AI reduce false-positives without increasing false negatives?
- Is there a Receiver Operating Characteristic (ROC) curve; plotting the True Positive Rate (TPR) against the False Positive Rate (FPR)?
- Is/will A/B testing or multivariate testing be performed?
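The TPR, FPR, ROC, and AUC measures asked about above can be computed directly. A minimal sketch, assuming scikit-learn is available; the labels and scores are illustrative placeholders, not data from any real AI investment:

```python
# Sketch: computing TPR, FPR, the ROC curve, and AUC with scikit-learn.
from sklearn.metrics import confusion_matrix, roc_curve, roc_auc_score

y_true = [0, 0, 0, 0, 1, 1, 1, 1]                      # ground-truth labels
y_score = [0.1, 0.4, 0.35, 0.8, 0.45, 0.6, 0.7, 0.9]   # model probabilities

# TPR and FPR at a single decision threshold (0.5 here)
y_pred = [1 if s >= 0.5 else 0 for s in y_score]
tn, fp, fn, tp = confusion_matrix(y_true, y_pred).ravel()
tpr = tp / (tp + fn)   # True Positive Rate (sensitivity/recall)
fpr = fp / (fp + tn)   # False Positive Rate
print(f"TPR={tpr:.2f}  FPR={fpr:.2f}")

# Full ROC curve: TPR vs. FPR across all thresholds, summarized by AUC
fprs, tprs, thresholds = roc_curve(y_true, y_score)
auc = roc_auc_score(y_true, y_score)
print(f"AUC={auc:.2f}")
```

Sweeping the threshold (rather than fixing it at 0.5) is what exposes the false-positive vs. false-negative tradeoff the questions above probe.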
What AI governance is in place?
- Does AI Governance implement a risk-based approach, e.g. greater consideration or controls for high risk use cases?
- What are the AI architecture specifics, e.g. Ensemble Learning methods used, Graph Convolutional Network (GCN), Graph Neural Networks (Graph Nets), Geometric Deep Learning, Digital Twin, Decentralized: Federated & Distributed?
- Is the wetware/brain or hardware involved, e.g. Internet of Things (IoT); physical sensors, mobile phones, screening devices, cameras/surveillance, medical instrumentation, robots, autonomous vehicles, drones, quantum computing, assistants/chatbots?
- What learning technique(s) are or will be implemented? If a transfer learning process is used, which model(s) and what mission-specific dataset(s) are used to tune the AI model?
- What AI algorithms/model type(s) are used? Regression, K-Nearest Neighbors (KNN), Deep Neural Network (DNN), Natural Language Processing (NLP), Association Rule Learning, etc.
- Do requirements trace to tests?
- If using machine learning, how are the models evaluated?
- Has an error analysis been performed to reveal failure scenarios?
- How is troubleshooting accomplished? How transparent is the development process?
- How is bias accounted for in the AI process? What assurance is provided that the model (algorithm) is not biased?
- Is one of the mission's goals to be able to understand the AI in terms of inputs and how they impact the outcome (prediction)? Is the model (implemented or to be implemented) explainable? Interpretable? How so? Are stakeholders involved? How?
- What is the data governance process? How are data silos governed? What data controls and policies are in place today? Planned?
- Is there a data management plan(ning)? Does data planning address metadata for dataflows and data transitions?
- Are the internal data resources available and accessible? What processes need to change to best obtain the data?
- For external data resources, have they been sourced with contracts in place to make the data available and accessible?
- Has ground truth been defined? Has the source(s) of data been identified for the current AI investment? Is ambiguous data being addressed for future AI investment(s)?
- What are the possible constraints or challenges in accessing or incorporating the identified data?
- Are permissions in place to use the data, consistent with privacy requirements?
- Are security restrictions considered and mitigated? What data needs protection? How is it protected with remote work?
- What is the expected size of the data to be used for training? What is the ratio of observations (rows) to features (columns)?
- How good is the quality of the data: is it skewed, complete, free of duplication, timely (vs. outdated), clean? If there is a data management plan, is there a section on data quality?
- How are the dataset(s) used assured to represent the problem space?
- How does the (proposed) process eliminate the injection of fake data into the process?
- What Key Performance Indicators (KPI) can the data potentially drive to achieve key mission objective(s)? What data is missing in order to establish the Key Performance Indicators (KPI)?
- Is there sufficient amount of data available? If temporal model, does the data have a rich history set? Does the historical data cover periodic and other critical events?
- Does the data have a refresh schedule? Is the data punctual: does it arrive on time, or is it ready to be pulled?
- Is there an effort to identify unintended feedback loop(s)?
- For each data source, has the content been determined to be structured, semi-structured, or unstructured?
- Will any data labeling be required? Is the data augmented? Is auto-tagging used? What data augmentation tools are/will be used?
- Have the key features/data attributes to be used in the AI model been identified?
- Will the labeling be enabled by merging domain knowledge with ontologies? If so, have concepts and associations been identified?
- How good is the quality of the data labeling? How close is the ground truth to being a gold standard?
- What data/feature exploration/engineering processes and tools are in place or being considered?
- If needed, what are the algorithms used to combine AI features? What is the approximate number of features used?
- What is the process for removing features/data believed not to be relevant?
- What data quality checks are in place? What tools are in place or being considered?
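Several of the data questions above (completeness, duplication, skew, observation-to-feature ratio) can be answered with automated checks. A minimal sketch, assuming pandas; the dataframe and column names are illustrative assumptions, not any particular investment's data:

```python
# Sketch: basic automated data-quality checks with pandas.
import pandas as pd

df = pd.DataFrame({
    "feature_a": [1.0, 2.0, None, 4.0, 5.0, 5.0],
    "feature_b": ["x", "y", "y", "z", "x", "x"],
    "label":     [0, 0, 0, 0, 1, 0],
})

report = {
    # Completeness: fraction of missing cells per column
    "missing_fraction": df.isna().mean().to_dict(),
    # Duplication: count of fully duplicated rows
    "duplicate_rows": int(df.duplicated().sum()),
    # Skew: class balance of the target label
    "label_balance": df["label"].value_counts(normalize=True).to_dict(),
    # Ratio of observations (rows) to features (columns, excluding the label)
    "rows_per_feature": len(df) / (df.shape[1] - 1),
}
print(report)
```

Running such a report on every data refresh, and recording it in the data management plan, gives the data-quality section asked about above something concrete to track over time.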
What is the algorithm administration strategy?
- What is the deployment vision? What attributes are being used to size the investment: count of users, queries, installations, etc.? What is the Minimum Viable Product (MVP) version of the AI investment that has enough features to satisfy early users and provide feedback for future investment development? If an incremental rollout, what is the strategy: portion of the users, markets, locations, capabilities?
- What tool(s) are used or will be used for model management?
- How are Hyperparameters managed? What optimizers are used, e.g. automated learning (AutoML)?
- What components are integrated in the model management tool(s), e.g. optimizer, tuner, training, versioning, model dependencies (training data, dataset(s), historical lineage), publishing, performance evaluations, and model storing?
- What is the reuse strategy? Is there a single POC for the reuse process/tools?
- Are the AI models published (repo, marketplace) for reuse, if so where?
- Is the AI model reused from a repository (repo, marketplace)? If so, which one(s)?
- Is Master Data Management (MDM) in place? What tools are available or being considered?
- Is data lineage managed?
- What data cataloging capabilities exists today? Future capabilities?
- How are data versions controlled?
- How are the dataset(s) used for AI training, testing and validation managed?
- Are logs kept on which data is used for different executions/training so that the information used is traceable?
- How is the access to the information guaranteed? Are the dataset(s) for AI published (repo, marketplace) for reuse, if so where?
- What is the development & implementation plan?
- What foundational capabilities are defined or in place for the AI investment, e.g. infrastructure platform, cloud resources?
- What languages & scripting are/will be used? e.g. Python, JavaScript, PyTorch
- What Libraries & Frameworks are used?
- Are notebooks used? If so, is Jupyter supported?
- What visualizations are used for development? For AI investment user(s)?
- Will the AI investment leverage Machine Learning as a Service (MLaaS)? Or be offered as a MLaaS?
- How is the AI investment deployed?
- What is the plan for model serving? For each use case, is the serving batched or streamed? If applicable, have REST endpoints been defined and exposed?
- Is the AI investment implementing an AIOps pipeline/toolchain?
- What tools are used for the AIOps? Please identify those on-premises and online services?
- Are the AI languages, libraries, scripting, and AIOps applications registered in the organization?
- Are the processes and decisions architecture driven to allow for end-to-end visibility and allow for dependency management? Is information mapped to the intended use to allow analytics and visualizations framed in context?
- Does the AI investment depict the AIOps pipeline/toolchain applications in its architecture, e.g. tech stack?
- Does the SecDevOps architecture depict the AI investment and how its health metrics are presented?
- Is algorithm administration reflected in the AIOps pipeline/toolchain processes/architecture?
- How is production readiness determined?
- Does the team use ML Test Score for production readiness?
- What are the minimum scores for Data, Model, ML Infrastructure, and Monitoring tests?
- What score qualifies to pass into production? What is the rationale for passing if less than exceptional (score of >5)?
- What were the lessons learned? Were adjustments made to move to a higher score? What were the adjustments?
- Who makes the determination when the AI investment is deployed/refreshed?
- How does the team prepare for cybersecurity? Does it use the MITRE ATT&CK™ Framework? The GSA DevSecOps Guide?
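The versioning, lineage, and traceability questions above come down to keeping a registry record per model version. A minimal sketch using only the standard library; the field names, file layout, and `register_model` helper are illustrative assumptions, not the interface of any specific model-management tool:

```python
# Sketch: a minimal model-registry entry linking a model version to the
# training data it was built from (by content hash) and its evaluation scores.
import hashlib
import json
import datetime

def register_model(version: str, train_file: str, metrics: dict) -> dict:
    """Build a traceable registry entry for one model version."""
    with open(train_file, "rb") as f:
        data_hash = hashlib.sha256(f.read()).hexdigest()
    return {
        "version": version,
        "registered_at": datetime.datetime.now(datetime.timezone.utc).isoformat(),
        # Lineage: which exact dataset produced this model
        "training_data": {"path": train_file, "sha256": data_hash},
        # Performance evaluation recorded at registration time
        "metrics": metrics,
    }

# Usage: write the entry alongside the stored model artifact.
with open("train.csv", "w") as f:
    f.write("feature_a,label\n1.0,0\n2.0,1\n")
entry = register_model("1.2.0", "train.csv", {"auc": 0.91})
print(json.dumps(entry, indent=2))
```

Hashing the training file is what makes the log auditable: if the data used for a later retraining run differs, the hash differs, answering the question above about whether the information used is traceable.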
How are changes identified and managed?
- What capabilities are in place to identify, track, and notify changes?
- How often is the deployed AI process monitored or measures re-evaluated?
- Is the deployed AI investment collecting learning data? If so, how frequently are the algorithms updated?
- What aspects of the AI investment are being monitored, e.g. performance, model functionality, system, data (pipeline)?
- If the AI model is reused from a repository (repo, marketplace), how is the team notified of updates? How often is the repository checked for updates?
- Are the end-to-end visibility and bottleneck risks for AIOps pipeline/toolchain reflected in the risk register with mitigation strategy for each risk? Do mitigations tend to address symptoms only, or do the mitigations lead to improving root cause via analysis?
- When the AI model is updated, how is it determined that the performance was indeed increased for the better?
- What capabilities are in place to perceive, notify, and address operational environment changes? How is drift detected and remediated when the AI degrades over time due to data changes and the model is no longer effective in the environment?
- Is there a mechanism (automated, assisted, or manual) to provide change/event causation? Does the mechanism use AI, e.g. anomaly detection?
- Are response plans, procedures and training in place to address AI attack or failure incidents? How are AI investment’s models audited for security vulnerabilities?
- How is the team notified of changes? Active flagging/messaging (push) and/or passive health dashboard (polling)? How does the team use the end-to-end information to optimize the organization's resources and processes/services?
- Has role/job displacement due to automation and/or AI implementation been addressed?
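The drift detection asked about above can be monitored with a distribution-comparison statistic. A minimal sketch of one common choice, the Population Stability Index (PSI), in plain Python; the bin count, the 0.25 alert threshold, and the sample data are illustrative conventions, not a fixed standard:

```python
# Sketch: comparing a live feature distribution against its training-time
# baseline with the Population Stability Index (PSI).
import math

def psi(baseline, live, bins=10):
    """Population Stability Index between two samples of one feature."""
    lo, hi = min(baseline), max(baseline)
    span = (hi - lo) or 1.0  # avoid divide-by-zero for a constant feature

    def fractions(sample):
        counts = [0] * bins
        for x in sample:
            idx = int((x - lo) / span * bins)
            counts[min(max(idx, 0), bins - 1)] += 1  # clamp out-of-range values
        # Small floor avoids log(0) for empty bins
        return [max(c / len(sample), 1e-6) for c in counts]

    b, l = fractions(baseline), fractions(live)
    return sum((lb - bb) * math.log(lb / bb) for bb, lb in zip(b, l))

baseline = [i / 100 for i in range(100)]    # training-time distribution
live = [0.5 + i / 200 for i in range(100)]  # shifted production data
score = psi(baseline, live)
# A common rule of thumb: PSI > 0.25 signals significant drift
print(f"PSI={score:.3f}  drift={'yes' if score > 0.25 else 'no'}")
```

Running such a check per feature on each monitoring cycle, and pushing an alert when the score crosses the threshold, is one way to implement the active flagging/messaging mentioned above.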
- Evaluation - Measures
- AI Governance / Algorithm Administration
- AI Verification and Validation
- Leadership
- Best Practices
- Return on Investment (ROI)
- Procuring
- ML Test Score
- Cybersecurity: Evaluating & Selling
- Strategy & Tactics
- Checklists
- Automated Scoring
- Risk, Compliance and Regulation
- Screening; Passenger, Luggage, & Cargo
- Guidance on the AI auditing framework | Information Commissioner's Office (ICO)
- Technology Readiness Assessments (TRA) Guide | US GAO ...used to evaluate the maturity of technologies and whether they are developed enough to be incorporated into a system without too much risk.
- Cybersecurity Reference and Resource Guide | DOD
- Joint Capabilities Integration and Development System (JCIDS) | DOD
- Five ways to evaluate AI systems | Felix Wetzel - Recruiting Daily
- Cyber Security Evaluation Tool (CSET®) ...provides a systematic, disciplined, and repeatable approach for evaluating an organization’s security posture.
- 3 Common Technical Debts in Machine Learning and How to Avoid Them | Derek Chia - Towards Data Science
- Why you should care about debugging machine learning models | Patrick Hall and Andrew Burt - O'Reilly
- How to Assess an Artificial Intelligence Product or Solution (Even if You’re Not an AI Expert) | Daniel Faggella - Emerj
Nature of risks inherent to AI applications: We believe that the challenge in governing AI is less about dealing with completely new types of risk and more about existing risks either being harder to identify in an effective and timely manner, given the complexity and speed of AI solutions, or manifesting themselves in unfamiliar ways. As such, firms do not require completely new processes for dealing with AI, but they will need to enhance existing ones to take into account AI and fill the necessary gaps. The likely impact on the level of resources required, as well as on roles and responsibilities, will also need to be addressed. AI and risk management: Innovating with confidence | Deloitte