Explainable / Interpretable AI

Explainable Artificial Intelligence (XAI)

An AI system produces results together with an account of the path the system took to derive the solution or prediction - transparency of interpretation, rationale, and justification. 'If you have a good causal model of the world you are dealing with, you can generalize even in unfamiliar situations. That’s crucial. We humans are able to project ourselves into situations that are very different from our day-to-day experience. Machines are not, because they don’t have these causal models. We can hand-craft them but that’s not enough. We need machines that can discover causal models. To some extent it’s never going to be perfect. We don’t have a perfect causal model of the reality, that’s why we make a lot of mistakes. But we are much better off at doing this than other animals.' | Yoshua Bengio

Progress made with XAI:

  • Explainable AI Techniques: There are now off-the-shelf explainable AI techniques that developers can incorporate into their workflows as part of their modeling operations (a minimal sketch of one such technique follows this list). These techniques help to disclose the program's strengths and weaknesses, the specific criteria the program uses to arrive at a decision, and why it makes a particular decision as opposed to the alternatives.
  • Model Explainability: Model explainability is essential in high-stakes domains such as healthcare, finance, the legal system, and other critical industrial sectors. Explainable AI (XAI) is a subfield of AI that aims to develop AI systems that can provide clear and understandable explanations of their decision-making processes to humans. The goal of XAI is to make AI more transparent, trustworthy, responsible, and ethical.
  • Concept-Based Explanations: There has been progress in using concept-based explanations to explain deep neural networks. TCAV (Testing with Concept Activation Vectors) is a technique developed by Google AI that uses concept-based explanations to explain deep neural networks. This technique helps to make AI more transparent, trustworthy, responsible, and ethical.
  • Interpretable and Inclusive AI: There has been progress in building interpretable and inclusive AI systems from the ground up with tools designed to help detect and resolve bias, drift, and other gaps in data and models. AI Explanations in AutoML Tables, Vertex AI Predictions, and Notebooks provide data scientists with the insight needed to improve datasets or model architecture and debug model performance.
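As a hedged illustration of what "off-the-shelf" means here, the sketch below uses SHAP (one widely used library; LIME, Captum, and others fill a similar role) to attribute a tree model's predictions to its input features. The dataset and model are placeholders, not tied to any specific product mentioned above.

```python
# A minimal, illustrative sketch: post-hoc feature attributions with SHAP,
# one off-the-shelf XAI library. The dataset and model are placeholders.
import shap
import xgboost
from sklearn.datasets import load_breast_cancer

X, y = load_breast_cancer(return_X_y=True, as_frame=True)
model = xgboost.XGBClassifier().fit(X, y)        # any fitted tree ensemble works here

explainer = shap.TreeExplainer(model)            # fast, exact attributions for tree models
shap_values = explainer.shap_values(X)           # one contribution per feature per prediction

# Local view: which features pushed this one prediction up or down, and by how much.
shap.force_plot(explainer.expected_value, shap_values[0], X.iloc[0], matplotlib=True)

# Global view: which features the model relies on most across the whole dataset.
shap.summary_plot(shap_values, X)
```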

Explainable Computer Vision with Grad-CAM


We propose a technique for producing "visual explanations" for decisions from a large class of CNN-based models, making them more transparent. Our approach, Gradient-weighted Class Activation Mapping (Grad-CAM), uses the gradients of any target concept, flowing into the final convolutional layer, to produce a coarse localization map highlighting important regions in the image for predicting the concept. Grad-CAM is applicable to a wide variety of CNN model-families: (1) CNNs with fully-connected layers, (2) CNNs used for structured outputs, (3) CNNs used in tasks with multimodal inputs or reinforcement learning, without any architectural changes or re-training. We combine Grad-CAM with fine-grained visualizations to create a high-resolution class-discriminative visualization and apply it to off-the-shelf image classification, captioning, and visual question answering (VQA) models, including ResNet-based architectures. In the context of image classification models, our visualizations (a) lend insights into their failure modes, (b) are robust to adversarial images, (c) outperform previous methods on localization, (d) are more faithful to the underlying model, and (e) help achieve generalization by identifying dataset bias. For captioning and VQA, we show that even non-attention based models can localize inputs. We devise a way to identify important neurons through Grad-CAM and combine it with neuron names to provide textual explanations for model decisions. Finally, we design and conduct human studies to measure if Grad-CAM helps users establish appropriate trust in predictions from models and show that Grad-CAM helps untrained users successfully discern a 'stronger' model from a 'weaker' one even when both make identical predictions. Grad-CAM: Visual Explanations from Deep Networks via Gradient-based Localization | R. Selvaraju, M. Cogswell, A. Das, R. Vedantam, D. Parikh, and D. Batra

Building powerful Computer Vision-based apps without deep expertise has become possible for more people due to easily accessible tools like Python, Colab, Keras, PyTorch, and TensorFlow. But why does a computer classify an image the way that it does? This is a question that is critical when it comes to AI applied to diagnostics, driving, or any other form of critical decision making. In this episode, I'd like to raise awareness around one technique in particular called "Grad-CAM", or Gradient-weighted Class Activation Mapping. It allows you to generate a heatmap that shows which features in an image your model considers most relevant when making its predictions. I'll be explaining the math behind it and demoing a code sample by fairyonice to help you understand it. I hope that after this video, you'll be able to implement it in your own project. Enjoy! | Siraj Raval
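The recipe described in the abstract above can be sketched in a few lines of TensorFlow/Keras. This is only an illustration, not the authors' released implementation; the VGG16 model and its "block5_conv3" layer stand in for your own CNN and its final convolutional layer.

```python
# A minimal Grad-CAM sketch in TensorFlow/Keras -- an illustration, not the
# authors' released code. VGG16 and "block5_conv3" are stand-ins for your own
# CNN and its final convolutional layer.
import tensorflow as tf

model = tf.keras.applications.VGG16(weights="imagenet")
LAST_CONV = "block5_conv3"

def grad_cam(img_array, class_index):
    """img_array: preprocessed batch of shape (1, 224, 224, 3)."""
    # Map the input image to (final conv activations, class predictions).
    grad_model = tf.keras.Model(
        model.inputs, [model.get_layer(LAST_CONV).output, model.output]
    )
    with tf.GradientTape() as tape:
        conv_out, preds = grad_model(img_array)
        class_score = preds[:, class_index]
    # Gradients of the target class score w.r.t. the conv feature maps...
    grads = tape.gradient(class_score, conv_out)
    # ...global-average-pooled to give one importance weight per feature map.
    weights = tf.reduce_mean(grads, axis=(0, 1, 2))
    # Weighted sum of the feature maps, then ReLU: the coarse localization map.
    cam = tf.nn.relu(tf.reduce_sum(conv_out[0] * weights, axis=-1))
    return (cam / (tf.reduce_max(cam) + 1e-8)).numpy()   # normalized to [0, 1]
```

The returned map (14x14 for VGG16) is then upsampled to the input resolution and overlaid on the original image, as in the cat/dog example below.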

[Images: cat_dog.png with its Grad-CAM (cat_dog_242_gradcam.jpg) and Guided Grad-CAM (cat_dog_242_guided_gradcam.jpg) visualizations for class index 242]

Interpretable


Please Stop Doing "Explainable" ML: There has been an increasing trend in healthcare and criminal justice to leverage machine learning (ML) for high-stakes prediction applications that deeply impact human lives. Many of the ML models are black boxes that do not explain their predictions in a way that humans can understand. The lack of transparency and accountability of predictive models can have (and has already had) severe consequences; there have been cases of people incorrectly denied parole, poor bail decisions leading to the release of dangerous criminals, ML-based pollution models stating that highly polluted air was safe to breathe, and generally poor use of limited valuable resources in criminal justice, medicine, energy reliability, finance, and in other domains. Rather than trying to create models that are inherently interpretable, there has been a recent explosion of work on “Explainable ML,” where a second (posthoc) model is created to explain the first black box model. This is problematic. Explanations are often not reliable, and can be misleading, as we discuss below. If we instead use models that are inherently interpretable, they provide their own explanations, which are faithful to what the model actually computes. Stop Explaining Black Box Machine Learning Models for High Stakes Decisions and Use Interpretable Models Instead | Cynthia Rudin - Duke University
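A minimal sketch of the alternative Rudin argues for: fit a small model whose structure is itself the explanation, so no second post-hoc model is needed. The dataset and tree depth below are illustrative choices, not taken from the paper.

```python
# A minimal sketch of an inherently interpretable model: a shallow decision tree
# whose printed rules *are* the model, so no second (post-hoc) explainer is needed.
# Dataset and depth are illustrative choices, not taken from the paper.
from sklearn.datasets import load_breast_cancer
from sklearn.tree import DecisionTreeClassifier, export_text

X, y = load_breast_cancer(return_X_y=True, as_frame=True)
tree = DecisionTreeClassifier(max_depth=3, random_state=0).fit(X, y)

# The explanation is faithful by construction: it is exactly what the model computes.
print(export_text(tree, feature_names=list(X.columns)))
```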

Accuracy & Interpretability Trade-Off


Model Prediction Accuracy and Model Interpretability Trade Off
In this video, we discuss the effects of choosing to model for prediction accuracy or for interpretability via the trade-off between the two, which is largely driven by a model's flexibility. We look at different models and where they fall on an Interpretability-versus-Flexibility plot.
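A minimal sketch of that trade-off, assuming scikit-learn and an illustrative dataset: a linear model whose coefficients can be read directly is compared against a more flexible gradient-boosting ensemble that is harder to inspect. Which one scores higher depends on the data; the flexibility axis being traded along is the point.

```python
# A minimal sketch of the accuracy-vs-interpretability trade-off:
# a readable linear model vs. a flexible, harder-to-inspect ensemble.
from sklearn.datasets import load_diabetes
from sklearn.ensemble import GradientBoostingRegressor
from sklearn.linear_model import LinearRegression
from sklearn.model_selection import cross_val_score

X, y = load_diabetes(return_X_y=True)

models = {
    "linear (interpretable)": LinearRegression(),
    "boosting (flexible)":    GradientBoostingRegressor(random_state=0),
}
for name, model in models.items():
    score = cross_val_score(model, X, y, cv=5, scoring="r2").mean()
    print(f"{name:24s} mean CV R^2 = {score:.3f}")
```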

Trust




Measurement and Transparency are key to trusted AI



How can you trust artificial intelligence and machine learning systems?
Trust is always a big challenge with AI systems because we don’t often know how to interpret what is going on “in its head,” if you will. This is an active area of research, including the challenge of being able to explain how an AI system comes to the conclusion that it did. In this video, I simplify the concepts of trust so a business manager can understand how to evaluate and build trusted AI within the context of their environment. | Raj Ramesh

Demo: Trust and Transparency for AI on the IBM Cloud
See a demo of the new trust and transparency features for AI being made available in IBM Cloud. Explore the main features of the tooling using examples based on fraud detection and loan approval workflows. Learn more at https://ibm.co/2xmpQFM

Modeling the Interplay of Trust and Attention in HRI: an Autonomous Vehicle Study
Indu P. Bodala, Bing Cai Kok, Weicong Sng, Harold Soh | HRI'20: ACM/IEEE International Conference on Human-Robot Interaction, Session: Late Breaking Reports. In this work, we study and model how two factors of human cognition, trust and attention, affect the way humans interact with autonomous vehicles. We develop a probabilistic model that succinctly captures how trust and attention evolve across time to drive behavior, and present results from a human-subjects experiment where participants interacted with a simulated autonomous vehicle while engaging with a secondary task. Our main findings suggest that trust affects attention, which in turn affects the human's decision to intervene with the autonomous vehicle. DOI: https://doi.org/10.1145/3371382.3378262 WEB: https://humanrobotinteraction.org/2020/ Companion program for the ACM/IEEE International Conference on Human-Robot Interaction 2020

Juiced and Ready to Predict Private Information in Deep Cooperative Reinforcement Learning
Eugene Lim, Bing Cai Kok, Songli Wang, Joshua Lee, Harold Soh | HRI'20: ACM/IEEE International Conference on Human-Robot Interaction. In human-robot collaboration settings, each agent often has access to private information (PI) that is unavailable to others. Examples include task preferences, objectives, and beliefs. Here, we focus on the human-robot dyadic scenarios where the human has private information, but is unable to directly convey it to the robot. We present Q-Network with Private Information and Cooperation (Q-PICo), a method for training robots that can interactively assist humans with PI. In contrast to existing approaches, we explicitly model PI prediction, leading to a more interpretable network architecture. We also contribute Juiced, an environment inspired by the popular video game Overcooked, to test Q-PICo and other related methods for human-robot collaboration. Our initial experiments in Juiced show that the agents trained with Q-PICo can accurately predict PI and exhibit collaborative behavior. DOI: https://doi.org/10.1145/3371382.3378308 WEB: https://humanrobotinteraction.org/2020/ Companion program for the ACM/IEEE International Conference on Human-Robot Interaction 2020

Building trust in AI, the IBM way | ZDNet
Aleksandra Mojsilovic, head of AI foundations at IBM Research, co-director of IBM science for social good, and IBM fellow, tells Tonya Hall about IBM's bias-busting toolbox.

EY platform uses Fairlearn to help customers gain trust in AI
One of the biggest barriers to adoption of AI is lack of trust. Professional services firm EY is committed to providing the frameworks and tools that organizations need to support and monitor the responsible application of AI. This helps the organizations better understand their customers, identify fraud and security breaches sooner, and make fairer loan decisions faster. The EY Trusted AI Platform, which uses Microsoft Azure Machine Learning capabilities to assess and mitigate unfairness in machine learning models, helps customers—and their regulators—develop confidence in AI and machine learning.
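Fairlearn itself is open source, and the kind of group-level fairness assessment such a platform builds on can be sketched in a few lines. The labels, predictions, and sensitive feature below are toy placeholders, not EY's data or pipeline.

```python
# A minimal Fairlearn sketch of a group-level fairness assessment.
# The labels, predictions, and sensitive feature are toy placeholders.
import pandas as pd
from sklearn.metrics import accuracy_score
from fairlearn.metrics import MetricFrame, demographic_parity_difference

y_true = pd.Series([0, 1, 1, 0, 1, 0, 1, 1])
y_pred = pd.Series([0, 1, 0, 0, 1, 1, 1, 1])
group  = pd.Series(["A", "A", "B", "B", "A", "B", "A", "B"])   # e.g. a protected attribute

# Accuracy broken out per group: a large gap is a signal worth investigating.
frame = MetricFrame(metrics=accuracy_score, y_true=y_true, y_pred=y_pred,
                    sensitive_features=group)
print(frame.by_group)

# Difference in selection rates between groups (0 would mean demographic parity).
print(demographic_parity_difference(y_true, y_pred, sensitive_features=group))
```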

Should you trust what AI says? | Elisa Celis | TEDxProvidence
Yale Professor Elisa Celis worked to create AI technology to better the world, only to find out that it has a problem. A big one. AI that is designed to serve all of us, in fact, excludes most of us. Learn why this happens, what can be fixed, and if that is really enough. Elisa Celis is an Assistant Professor of Statistics and Data Science at Yale University. Elisa’s research focuses on problems that arise at the interface of computation and machine learning and its societal ramifications. Specifically, she studies the manifestation of social and economic biases in our online lives via the algorithms that encode and perpetuate them. Her work spans multiple areas including social computing and crowd-sourcing, data science, and algorithm design with a current emphasis on fairness and diversity in artificial intelligence and machine learning. This talk was given at a TEDx event using the TED conference format but independently organized by a local community. Learn more at https://www.ted.com/tedx

How can we design AI that we trust? | Fang Chen | TEDxSydney
According to artificial intelligence professional, Dr Fang Chen, "the continual use of technology hinges upon human trust. It’s one of the main roadblocks to overcome". In this fascinating talk, she explains the framework she has developed to help humans make informed decisions when it comes to our adoption of AI - it’s all about trust. Dr. Fang Chen is Senior Principal Researcher at Data61, CSIRO, and a thought leader in AI and Human-Machine Interaction. She has pioneered the theoretical framework of human behaviour understanding for building human-machine trust. With her work in AI solutions, she has also become an expert in the factors influencing technology uptake: most notably human perception. Dr Chen has contributed to more than 250 publications and 30 patents in eight countries. She holds a professorship at the University of New South Wales and the University of Sydney. This talk was given at a TEDx event using the TED conference format but independently organized by a local community. Learn more at https://www.ted.com/tedx

Domains

Government

AI AND TRUSTWORTHINESS -- Increasing trust in AI technologies is a key element in accelerating their adoption for economic growth and future innovations that can benefit society. Today, the ability to understand and analyze the decisions of AI systems and measure their trustworthiness is limited. Among the characteristics that relate to trustworthy AI technologies are accuracy, reliability, resiliency, objectivity, security, explainability, safety, and accountability. Ideally, these aspects of AI should be considered early in the design process and tested during the development and use of AI technologies. AI standards and related tools, along with AI risk management strategies, can help to address this limitation and spur innovation. ... It is important for those participating in AI standards development to be aware of, and to act consistently with, U.S. government policies and principles, including those that address societal and ethical issues, governance, and privacy. While there is broad agreement that these issues must factor into AI standards, it is not clear how that should be done and whether there is yet sufficient scientific and technical basis to develop those standards provisions. | Plan Outlines Priorities for Federal Agency Engagement in AI Standards Development


Healthcare

Interpretable Machine Learning for Healthcare