arXiv Analytics


arXiv:2311.11108 [cs.LG]

Auxiliary Losses for Learning Generalizable Concept-based Models

Ivaxi Sheth, Samira Ebrahimi Kahou

Published 2023-11-18 (Version 1)

The increasing use of neural networks in various applications has led to growing concerns, underscoring the need to understand their behavior beyond final predictions alone. As a way to enhance model transparency, Concept Bottleneck Models (CBMs) have gained popularity since their introduction. CBMs essentially limit the latent space of a model to human-understandable high-level concepts. While beneficial, CBMs have been reported to often learn irrelevant concept representations that consequently degrade model performance. To overcome this performance trade-off, we propose the cooperative-Concept Bottleneck Model (coop-CBM). The concept representation of our model is particularly useful when fine-grained concept labels are absent. Furthermore, we introduce the concept orthogonal loss (COL) to encourage separation between the representations of different concepts and to reduce the intra-concept distance. This paper presents extensive experiments on real-world datasets for image classification tasks, namely CUB, AwA2, CelebA and TIL. We also study the performance of coop-CBM under various distributional shift settings. We show that our proposed method achieves higher accuracy in all distributional shift settings, even compared to black-box models, while attaining the highest concept accuracy.
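
The abstract does not give the exact form of the concept orthogonal loss (COL), so the sketch below is only an illustrative guess at a loss matching its stated goals: pushing representations of different concepts toward orthogonality while pulling representations of the same concept closer together. The function and variable names (concept_orthogonal_loss, concept_embeddings, concept_labels) are hypothetical and not taken from the paper.

    import torch
    import torch.nn.functional as F

    def concept_orthogonal_loss(concept_embeddings, concept_labels):
        """Illustrative (not the paper's) orthogonality-style loss.

        concept_embeddings: (N, D) tensor of per-sample concept representations
        concept_labels:     (N,)   tensor of integer concept indices
        """
        z = F.normalize(concept_embeddings, dim=1)   # unit-norm rows
        sim = z @ z.t()                              # (N, N) cosine similarities
        same = concept_labels.unsqueeze(0) == concept_labels.unsqueeze(1)
        eye = torch.eye(len(z), dtype=torch.bool, device=z.device)

        # Inter-concept term: drive |cos| toward 0 (orthogonality) for pairs
        # belonging to different concepts.
        inter = sim[~same].abs().mean() if (~same).any() else sim.new_zeros(())

        # Intra-concept term: drive cos toward 1 for same-concept pairs
        # (self-pairs on the diagonal are excluded).
        intra_mask = same & ~eye
        intra = (1.0 - sim[intra_mask]).mean() if intra_mask.any() else sim.new_zeros(())

        return inter + intra

In a CBM-style training loop, such a term would presumably be added with a weighting coefficient to the usual concept-prediction and task losses; the exact weighting and formulation used by coop-CBM are described in the paper itself.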

Related articles:
arXiv:1612.07307 [cs.LG] (Published 2016-12-21)
Loss is its own Reward: Self-Supervision for Reinforcement Learning
arXiv:2207.00986 [cs.LG] (Published 2022-07-03)
Stabilizing Off-Policy Deep Reinforcement Learning from Pixels
arXiv:1708.06832 [cs.LG] (Published 2017-08-22)
Anytime Neural Networks via Joint Optimization of Auxiliary Losses