{ "id": "2006.01272", "version": "v1", "published": "2020-06-01T21:20:04.000Z", "updated": "2020-06-01T21:20:04.000Z", "title": "Shapley-based explainability on the data manifold", "authors": [ "Christopher Frye", "Damien de Mijolla", "Laurence Cowton", "Megan Stanley", "Ilya Feige" ], "comment": "8 pages, 5 figures, 2 appendices", "categories": [ "cs.LG", "cs.AI", "stat.ML" ], "abstract": "Explainability in machine learning is crucial for iterative model development, compliance with regulation, and providing operational nuance to model predictions. Shapley values provide a general framework for explainability by attributing a model's output prediction to its input features in a mathematically principled and model-agnostic way. However, practical implementations of the Shapley framework make an untenable assumption: that the model's input features are uncorrelated. In this work, we articulate the dangers of this assumption and introduce two solutions for computing Shapley explanations that respect the data manifold. One solution, based on generative modelling, provides flexible access to on-manifold data imputations, while the other directly learns the Shapley value function in a supervised way, providing performance and stability at the cost of flexibility. While the commonly used ``off-manifold'' Shapley values can (i) break symmetries in the data, (ii) give rise to misleading wrong-sign explanations, and (iii) lead to uninterpretable explanations in high-dimensional data, our approach to on-manifold explainability demonstrably overcomes each of these problems.", "revisions": [ { "version": "v1", "updated": "2020-06-01T21:20:04.000Z" } ], "analyses": { "keywords": [ "data manifold", "shapley-based explainability", "shapley value function", "on-manifold explainability demonstrably overcomes", "on-manifold data imputations" ], "note": { "typesetting": "TeX", "pages": 8, "language": "en", "license": "arXiv", "status": "editable" } } }