arXiv:2501.01441 [cs.HC]

Explanatory Debiasing: Involving Domain Experts in the Data Generation Process to Mitigate Representation Bias in AI Systems

Aditya Bhattacharya, Simone Stumpf, Robin De Croon, Katrien Verbert

Published 2024-12-26 (Version 1)

Representation bias is one of the most common types of biases in artificial intelligence (AI) systems, causing AI models to perform poorly on underrepresented data segments. Although AI practitioners use various methods to reduce representation bias, their effectiveness is often constrained by insufficient domain knowledge in the debiasing process. To address this gap, this paper introduces a set of generic design guidelines for effectively involving domain experts in representation debiasing. We instantiated our proposed guidelines in a healthcare-focused application and evaluated them through a comprehensive mixed-methods user study with 35 healthcare experts. Our findings show that involving domain experts can reduce representation bias without compromising model accuracy. Based on our findings, we also offer recommendations for developers to build robust debiasing systems guided by our generic design guidelines, ensuring more effective inclusion of domain experts in the debiasing process.
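The abstract describes mitigating representation bias by adding data for underrepresented segments. As a minimal illustrative sketch (not the authors' method or tooling), the snippet below measures how far each group falls short of a target share and then oversamples underrepresented groups until the target is met; the names `representation_gap`, `oversample`, and `target_share` are hypothetical, and in the paper's setting the new records would come from expert-guided data generation rather than simple duplication.

```python
from collections import Counter
import random

def representation_gap(groups, target_share):
    """Return, per group, the fraction by which it falls short of its target share."""
    counts = Counter(groups)
    n = len(groups)
    return {g: max(0.0, share - counts[g] / n) for g, share in target_share.items()}

def oversample(records, key, target_share, seed=0):
    """Duplicate records from underrepresented groups until each group reaches
    its target share. A stand-in for expert-guided data generation; NOT the
    paper's actual debiasing procedure."""
    rng = random.Random(seed)
    out = list(records)
    counts = Counter(r[key] for r in out)
    for group, share in target_share.items():
        pool = [r for r in records if r[key] == group]
        # Each added record raises the group's share, so this loop terminates
        # for any target share below 1.0.
        while counts[group] / len(out) < share:
            out.append(rng.choice(pool))
            counts[group] += 1
    return out

# Example: a 90/10 split where group "B" should make up at least 30% of the data.
records = [{"g": "A"}] * 9 + [{"g": "B"}] * 1
print(representation_gap([r["g"] for r in records], {"B": 0.3}))
augmented = oversample(records, "g", {"B": 0.3})
```

In practice, the domain-expert involvement the paper studies would replace the `rng.choice` duplication step with expert review of generated candidate records, which is what lets debiasing proceed without degrading accuracy on well-represented segments.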

Comments: Pre-print version; please cite the main article instead of this pre-print
Categories: cs.HC, cs.AI