Explainable / Interpretable AI
* [[Journey to Singularity]]
* [http://arxiv.org/abs/1802.07623 Explanations based on the Missing: Towards Contrastive Explanations with Pertinent Negatives] - IBM
* [http://papers.ssrn.com/sol3/papers.cfm?abstract_id=2903469 Why a Right to Explanation of Automated Decision-Making Does Not Exist in the General Data Protection Regulation | Wachter, S., Mittelstadt, B., Floridi, L. - University of Oxford], 28 Dec 2016
* [http://thenewstack.io/deep-learning-neural-networks-google-deep-dream/ This is What Happens When Deep Learning Neural Networks Hallucinate | Kimberley Mok]
An explainable AI system produces results together with an account of the path it took to derive the solution or prediction: transparency of interpretation, rationale, and justification.
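The idea above - a prediction delivered together with the reasoning path behind it - can be sketched in plain Python. Everything here is illustrative (the rule thresholds, field names, and the `predict_with_explanation` function are invented for this sketch, not drawn from any particular XAI library):

```python
# Illustrative sketch only: a classifier that returns its decision plus the
# rule-by-rule trace that produced it. All rules and thresholds are invented.

def predict_with_explanation(applicant):
    """Return (decision, trace): the prediction and the reasoning path."""
    trace = []
    if applicant["income"] < 20000:
        trace.append(f"income {applicant['income']} < 20000 -> high risk")
        return "deny", trace
    trace.append(f"income {applicant['income']} >= 20000 -> passed income check")
    if applicant["debt_ratio"] > 0.5:
        trace.append(f"debt_ratio {applicant['debt_ratio']} > 0.5 -> high risk")
        return "deny", trace
    trace.append(f"debt_ratio {applicant['debt_ratio']} <= 0.5 -> passed debt check")
    return "approve", trace

decision, trace = predict_with_explanation({"income": 35000, "debt_ratio": 0.3})
print(decision)          # the prediction itself
for step in trace:       # the account of how it was derived
    print(" -", step)
```

A transparent model of this kind answers "why?" by construction; for opaque models (deep networks), the same prediction-plus-rationale output has to be approximated post hoc, which is what much of the linked work addresses.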
Revision as of 12:58, 30 June 2018