{ "id": "2206.07737", "version": "v1", "published": "2022-06-15T18:06:45.000Z", "updated": "2022-06-15T18:06:45.000Z", "title": "Disparate Impact in Differential Privacy from Gradient Misalignment", "authors": [ "Maria S. Esipova", "Atiyeh Ashari Ghomi", "Yaqiao Luo", "Jesse C. Cresswell" ], "comment": "Accepted as a ICML workshop paper at TPDP 2022", "categories": [ "cs.LG", "cs.AI", "cs.CR" ], "abstract": "As machine learning becomes more widespread throughout society, aspects including data privacy and fairness must be carefully considered, and are crucial for deployment in highly regulated industries. Unfortunately, the application of privacy enhancing technologies can worsen unfair tendencies in models. In particular, one of the most widely used techniques for private model training, differentially private stochastic gradient descent (DPSGD), frequently intensifies disparate impact on groups within data. In this work we study the fine-grained causes of unfairness in DPSGD and identify gradient misalignment due to inequitable gradient clipping as the most significant source. This observation leads us to a new method for reducing unfairness by preventing gradient misalignment in DPSGD.", "revisions": [ { "version": "v1", "updated": "2022-06-15T18:06:45.000Z" } ], "analyses": { "keywords": [ "differential privacy", "differentially private stochastic gradient descent", "worsen unfair tendencies", "frequently intensifies disparate impact", "widespread throughout society" ], "note": { "typesetting": "TeX", "pages": 0, "language": "en", "license": "arXiv", "status": "editable" } } }