Explainable / Interpretable AI
* [http://papers.ssrn.com/sol3/papers.cfm?abstract_id=2903469 Why a right to explanation of automated decision-making does not exist in the General Data Protection Regulation | Wachter, S., Mittelstadt, B., Floridi, L. - University of Oxford], 28 Dec 2016
* [http://thenewstack.io/deep-learning-neural-networks-google-deep-dream/ This is What Happens When Deep Learning Neural Networks Hallucinate | Kimberley Mok]
* [http://docs.h2o.ai/driverless-ai/latest-stable/docs/booklets/MLIBooklet.pdf H2O Machine Learning Interpretability with H2O Driverless AI]
An explainable AI system produces its results together with an account of the path it took to derive the solution/prediction - transparency of interpretation, rationale, and justification.
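The idea of returning a prediction together with an account of how it was derived can be sketched for the simplest interpretable model, a linear scorer. This is a minimal illustrative example; the feature names and weights below are hypothetical and not drawn from any system mentioned on this page.

```python
def explain_prediction(weights, bias, sample):
    """Linear model: prediction = bias + sum(w_i * x_i).

    Returns the prediction together with each feature's contribution,
    i.e. the 'account of the path' the model took to its output.
    """
    # Per-feature contribution: weight times feature value
    contributions = {name: weights[name] * value for name, value in sample.items()}
    prediction = bias + sum(contributions.values())
    return prediction, contributions

# Hypothetical weights for an illustrative loan-scoring model
weights = {"income": 0.5, "debt": -0.3}
pred, contribs = explain_prediction(weights, bias=1.0,
                                    sample={"income": 4.0, "debt": 2.0})
# The caller can now report not just `pred` but also that "income"
# pushed the score up and "debt" pushed it down, and by how much.
```

More complex models need dedicated techniques (surrogate models, contrastive explanations, and the tooling covered in the links above), but the contract is the same: output plus rationale.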
<youtube>MMxZlr_L6YE</youtube>
<youtube>gB_-LabED68</youtube>
<youtube>Os9ZcT4-lzk</youtube>
Revision as of 00:43, 28 September 2018
* AI Verification and Validation
* Journey to Singularity
* Self Learning Artificial Intelligence - AutoML & World Models
* Explanations based on the Missing: Towards Contrastive Explanations with Pertinent Negatives - IBM