arXiv Analytics

arXiv:2209.15123 [cs.LG]

Understanding Interventional TreeSHAP: How and Why it Works

Gabriel Laberge, Yann Pequignot

Published 2022-09-29 (Version 1)

Shapley values are ubiquitous in interpretable Machine Learning due to their strong theoretical background and efficient implementation in the SHAP library. Computing these values previously incurred a cost exponential in the number of input features of an opaque model. With efficient implementations such as Interventional TreeSHAP, this exponential burden is alleviated, provided one is explaining ensembles of decision trees. Although Interventional TreeSHAP has risen in popularity, it still lacks a formal proof of how and why it works. We provide such a proof with the aim not only of increasing the transparency of the algorithm but also of encouraging further development of these ideas. Notably, our proof for Interventional TreeSHAP is easily adapted to Shapley-Taylor indices.
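To make the setting concrete, the following is a minimal sketch (not taken from the paper) of how Interventional TreeSHAP is typically invoked via the SHAP library's TreeExplainer with feature_perturbation="interventional"; the toy dataset, tree-ensemble model, and background sample are illustrative assumptions, not the authors' experimental setup.

```python
# Minimal sketch of the setting described in the abstract: Interventional
# TreeSHAP applied to an ensemble of decision trees via the SHAP library.
# Dataset, model, and background choices are illustrative assumptions.
import numpy as np
import shap
from sklearn.datasets import make_regression
from sklearn.ensemble import RandomForestRegressor

# Toy regression data and a tree ensemble (the model class TreeSHAP targets).
X, y = make_regression(n_samples=500, n_features=8, random_state=0)
model = RandomForestRegressor(n_estimators=50, random_state=0).fit(X, y)

# Interventional TreeSHAP computes Shapley values relative to a background
# dataset without the cost exponential in the number of input features.
background = X[:100]
explainer = shap.TreeExplainer(model, data=background,
                               feature_perturbation="interventional")
phi = explainer.shap_values(X[:5])  # shape (5, 8): one attribution per feature

# Efficiency property of Shapley values: attributions plus the base value
# recover the model's prediction for each explained instance.
print(np.allclose(phi.sum(axis=1) + explainer.expected_value,
                  model.predict(X[:5]), atol=1e-4))
```

The final check illustrates the efficiency (local accuracy) axiom that the attributions are expected to satisfy; the tolerance is only there to absorb floating-point error.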

Related articles: Most relevant | Search more
arXiv:2411.00365 [cs.LG] (Published 2024-11-01)
ROSS:RObust decentralized Stochastic learning based on Shapley values
arXiv:2401.09756 [cs.LG] (Published 2024-01-18)
Explaining Drift using Shapley Values
arXiv:2104.01303 [cs.LG] (Published 2021-04-03)
Tight Compression: Compressing CNN Through Fine-Grained Pruning and Weight Permutation for Efficient Implementation