Explainable / Interpretable AI
- Tools:
- LIME (Local Interpretable Model-agnostic Explanations) explains the predictions of any classifier
- ELI5 debugs machine learning classifiers, explains their predictions, and inspects black-box models
- SHAP (SHapley Additive exPlanations) a game-theoretic approach to explaining the output of any machine learning model
- yellowbrick visual diagnostic tools for machine learning; its visualizer objects, the core interface, are scikit-learn estimators
- MLxtend extension and helper modules for Python's data analysis and machine learning libraries
- Risk, Compliance and Regulation
- Python Libraries for Interpretable Machine Learning | Rebecca Vickery - Towards Data Science
- AI Verification and Validation
- Causation vs. Correlation
- Evaluation Measures - Classification Performance
- Journey to Singularity
- Automated Machine Learning (AML) - AutoML
- Explanations based on the Missing: Towards Contrastive Explanations with Pertinent Negatives - IBM
- Why a right to explanation of automated decision-making does not exist in the General Data Protection Regulation | Wachter, S., Mittelstadt, B., Floridi, L. - University of Oxford, 28 Dec 2016
- This is What Happens When Deep Learning Neural Networks Hallucinate | Kimberley Mok
- H2O Machine Learning Interpretability with H2O Driverless AI
- A New Approach to Understanding How Machines Think | John Pavlus
- Visualization
- DrWhy | GitHub collection of tools for Explainable AI (XAI)
- Mixed Formal Learning - A Path to Transparent Machine Learning | Sandra Carrico
- Take advantage of open source trusted AI packages in IBM Cloud Pak for Data | Deborah Schalm - DevOps.com
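The model-agnostic explainers listed above (LIME, SHAP, ELI5) share one core idea: perturb the inputs of a black-box model and observe how its output shifts. A minimal sketch of that idea, with a hypothetical toy model (real libraries fit weighted local surrogates or compute Shapley values rather than this single-feature perturbation):

```python
def model(x):
    """A hypothetical black-box scorer over three features."""
    return 3.0 * x[0] - 2.0 * x[1] + 0.5 * x[2]


def local_importance(model, x, delta=1.0):
    """Score each feature by how much the model's output changes
    when that feature alone is perturbed by `delta`."""
    base = model(x)
    importances = []
    for i in range(len(x)):
        perturbed = list(x)
        perturbed[i] += delta  # nudge one feature, hold the rest fixed
        importances.append(model(perturbed) - base)
    return importances


# Each score mirrors the feature's weight in the toy model.
print(local_importance(model, [1.0, 1.0, 1.0]))  # -> [3.0, -2.0, 0.5]
```

The output ranks features by local influence on this one prediction, which is exactly the kind of per-prediction account LIME and SHAP produce at scale.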
Government
AI AND TRUSTWORTHINESS -- Increasing trust in AI technologies is a key element in accelerating their adoption for economic growth and future innovations that can benefit society. Today, the ability to understand and analyze the decisions of AI systems and measure their trustworthiness is limited. Among the characteristics that relate to trustworthy AI technologies are accuracy, reliability, resiliency, objectivity, security, explainability, safety, and accountability. Ideally, these aspects of AI should be considered early in the design process and tested during the development and use of AI technologies. AI standards and related tools, along with AI risk management strategies, can help to address this limitation and spur innovation. ... It is important for those participating in AI standards development to be aware of, and to act consistently with, U.S. government policies and principles, including those that address societal and ethical issues, governance, and privacy. While there is broad agreement that these issues must factor into AI standards, it is not clear how that should be done and whether there is yet sufficient scientific and technical basis to develop those standards provisions. (From NIST, "Plan Outlines Priorities for Federal Agency Engagement in AI Standards Development")
Advances
An explainable AI system produces results with an account of the path the system took to derive the solution/prediction - transparency of interpretation, rationale, and justification. 'If you have a good causal model of the world you are dealing with, you can generalize even in unfamiliar situations. That’s crucial. We humans are able to project ourselves into situations that are very different from our day-to-day experience. Machines are not, because they don’t have these causal models. We can hand-craft them but that’s not enough. We need machines that can discover causal models. To some extent it’s never going to be perfect. We don’t have a perfect causal model of the reality, that’s why we make a lot of mistakes. But we are much better at doing this than other animals.' - Yoshua Bengio
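The "account of the path" idea above can be sketched as a model that returns its prediction together with a trace of every rule it applied. The rules and thresholds here are hypothetical, chosen only to show the pattern:

```python
def explainable_classify(temp_c, humidity):
    """Classify weather and return (label, trace of applied rules).

    The trace is the explanation: each decision the model made,
    in the order it made them.
    """
    trace = []
    if temp_c > 25:
        trace.append(f"temp {temp_c} > 25 -> hot branch")
        label = "storm" if humidity > 0.8 else "sunny"
    else:
        trace.append(f"temp {temp_c} <= 25 -> mild branch")
        label = "rain" if humidity > 0.8 else "cloudy"
    cmp = ">" if humidity > 0.8 else "<="
    trace.append(f"humidity {humidity} {cmp} 0.8 -> {label}")
    return label, trace


label, trace = explainable_classify(30, 0.9)
print(label)        # -> storm
for step in trace:  # the rationale, step by step
    print(step)
```

Transparent-by-construction models (rule lists, decision trees) give this trace for free; the post-hoc tools listed earlier approximate it for black boxes.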