{ "id": "2106.12563", "version": "v1", "published": "2021-06-23T17:43:31.000Z", "updated": "2021-06-23T17:43:31.000Z", "title": "Feature Attributions and Counterfactual Explanations Can Be Manipulated", "authors": [ "Dylan Slack", "Sophie Hilgard", "Sameer Singh", "Hima Lakkaraju" ], "comment": "arXiv admin note: text overlap with arXiv:2106.02666", "categories": [ "cs.LG", "cs.CR" ], "abstract": "As machine learning models are increasingly used in critical decision-making settings (e.g., healthcare, finance), there has been a growing emphasis on developing methods to explain model predictions. Such \\textit{explanations} are used to understand and establish trust in models and are vital components in machine learning pipelines. Though explanations are a critical piece in these systems, there is little understanding about how they are vulnerable to manipulation by adversaries. In this paper, we discuss how two broad classes of explanations are vulnerable to manipulation. We demonstrate how adversaries can design biased models that manipulate model agnostic feature attribution methods (e.g., LIME \\& SHAP) and counterfactual explanations that hill-climb during the counterfactual search (e.g., Wachter's Algorithm \\& DiCE) into \\textit{concealing} the model's biases. These vulnerabilities allow an adversary to deploy a biased model, yet explanations will not reveal this bias, thereby deceiving stakeholders into trusting the model. We evaluate the manipulations on real world data sets, including COMPAS and Communities \\& Crime, and find explanations can be manipulated in practice.", "revisions": [ { "version": "v1", "updated": "2021-06-23T17:43:31.000Z" } ], "analyses": { "keywords": [ "counterfactual explanations", "model agnostic feature attribution methods", "manipulate model agnostic feature attribution", "real world data sets", "explain model predictions" ], "note": { "typesetting": "TeX", "pages": 0, "language": "en", "license": "arXiv", "status": "editable" } } }