Explainable Artificial Intelligence (XAI)

  • Tools:
    • LIME (Local Interpretable Model-agnostic Explanations) explains the prediction of any classifier by fitting an interpretable surrogate model locally around that prediction (see the sketch after this list)
    • ELI5 debugs machine learning classifiers, explains their predictions, and inspects black-box models
    • SHAP (SHapley Additive exPlanations) attributes a model's prediction to its input features using Shapley values from cooperative game theory (see the sketch after this list)
    • Yellowbrick provides visual diagnostic tools for machine learning; its visualizer objects, the core interface, are scikit-learn estimators
    • MLxtend supplies extension and helper modules for Python's data analysis and machine learning libraries, including plotting utilities such as decision-region visualization
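
A minimal sketch of how two of these tools are typically invoked, assuming scikit-learn's Iris data and a random forest stand in for the black-box model; the lime and shap calls are those packages' documented APIs, but the dataset, model, and parameter choices are illustrative, not from the source.

import numpy as np
import shap
from lime.lime_tabular import LimeTabularExplainer
from sklearn.datasets import load_iris
from sklearn.ensemble import RandomForestClassifier

# Train an opaque model whose predictions we want to explain.
data = load_iris()
X, y = data.data, data.target
model = RandomForestClassifier(n_estimators=100, random_state=0).fit(X, y)

# LIME: fit an interpretable surrogate locally around one prediction.
lime_explainer = LimeTabularExplainer(
    X, feature_names=data.feature_names,
    class_names=list(data.target_names), mode="classification")
lime_exp = lime_explainer.explain_instance(
    X[0], model.predict_proba, num_features=4)
print(lime_exp.as_list())  # (feature, weight) pairs for this one instance

# SHAP: Shapley-value attributions, here via the fast tree-model path.
shap_explainer = shap.TreeExplainer(model)
shap_values = shap_explainer.shap_values(X)  # attributions per class
print(np.shape(shap_values))

The design difference worth noting: LIME explains one prediction at a time with a local surrogate, while SHAP assigns every feature of every instance an additive contribution grounded in game theory.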


Government

AI AND TRUSTWORTHINESS -- Increasing trust in AI technologies is a key element in accelerating their adoption for economic growth and future innovations that can benefit society. Today, the ability to understand and analyze the decisions of AI systems and to measure their trustworthiness is limited. Among the characteristics that relate to trustworthy AI technologies are accuracy, reliability, resiliency, objectivity, security, explainability, safety, and accountability. Ideally, these aspects of AI should be considered early in the design process and tested during the development and use of AI technologies. AI standards and related tools, along with AI risk management strategies, can help to address this limitation and spur innovation. ... It is important for those participating in AI standards development to be aware of, and to act consistently with, U.S. government policies and principles, including those that address societal and ethical issues, governance, and privacy. While there is broad agreement that these issues must factor into AI standards, it is not clear how that should be done or whether there is yet a sufficient scientific and technical basis to develop those standards provisions. From NIST's announcement, Plan Outlines Priorities for Federal Agency Engagement in AI Standards Development.

Advances

An explainable AI system produces results along with an account of the path it took to derive the solution or prediction: transparency of interpretation, rationale, and justification. 'If you have a good causal model of the world you are dealing with, you can generalize even in unfamiliar situations. That’s crucial. We humans are able to project ourselves into situations that are very different from our day-to-day experience. Machines are not, because they don’t have these causal models. We can hand-craft them but that’s not enough. We need machines that can discover causal models. To some extent it’s never going to be perfect. We don’t have a perfect causal model of reality; that’s why we make a lot of mistakes. But we are much better at doing this than other animals.' - Yoshua Bengio
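
Bengio's contrast between hand-crafted and discovered causal models can be made concrete with a toy structural causal model. Everything in this sketch, the variables (rain, sprinkler, wet grass), the probabilities, and the mechanisms, is an illustrative assumption, not from the source; the point is that a causal model supports interventions, the 'what if' questions that a purely predictive model cannot answer.

import numpy as np

rng = np.random.default_rng(0)

def sample(n, do_sprinkler=None):
    """Sample from a toy rain -> sprinkler -> wet-grass causal model.

    Passing do_sprinkler performs an intervention: the sprinkler is
    forced on or off regardless of rain, breaking the usual mechanism.
    """
    rain = rng.random(n) < 0.2
    if do_sprinkler is None:
        # Observational mechanism: the sprinkler mostly runs when dry.
        sprinkler = np.where(rain, rng.random(n) < 0.01, rng.random(n) < 0.4)
    else:
        # Intervention do(sprinkler = value): override the mechanism.
        sprinkler = np.full(n, do_sprinkler)
    wet = rain | sprinkler
    return rain, sprinkler, wet

# Observational: P(wet) under the model as-is.
_, _, wet_obs = sample(100_000)
# Interventional: P(wet | do(sprinkler=off)) isolates rain's causal effect.
_, _, wet_do = sample(100_000, do_sprinkler=False)
print(wet_obs.mean(), wet_do.mean())

The gap between the observational and interventional estimates is exactly the kind of counterfactual reasoning the quote attributes to causal models: hand-crafting this model was easy, but discovering its structure from data alone is the open problem Bengio describes.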