arXiv Analytics


arXiv:2404.10717 [cs.CV]

Mixed Prototype Consistency Learning for Semi-supervised Medical Image Segmentation

Lijian Li

Published 2024-04-16 (Version 1)

Recently, prototype learning has emerged in semi-supervised medical image segmentation and achieved remarkable performance. However, the scarcity of labeled data limits the expressiveness of prototypes in previous methods, potentially preventing them from fully representing class embeddings. To address this problem, we propose the Mixed Prototype Consistency Learning (MPCL) framework, which consists of a Mean Teacher and an auxiliary network. The Mean Teacher generates prototypes for labeled and unlabeled data, while the auxiliary network produces additional prototypes for mixed data processed by CutMix. Through prototype fusion, the mixed prototypes supply extra semantic information to both the labeled and unlabeled prototypes. A high-quality global prototype for each class is formed by fusing the two enhanced prototypes, optimizing the distribution of the hidden embeddings used in consistency learning. Extensive experiments on the left atrium and type B aortic dissection datasets demonstrate MPCL's superiority over previous state-of-the-art approaches, confirming the effectiveness of our framework. The code will be released soon.
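The abstract names three building blocks: CutMix mixing for the auxiliary branch, per-class prototypes extracted from feature maps, and fusion of the two prototype sets into global prototypes used for consistency learning. The sketch below illustrates one plausible way these pieces fit together; the function names, the masked-average-pooling prototype extraction, the fusion weight `alpha`, and the cosine-similarity classifier are assumptions for illustration, not the authors' exact formulation.

```python
# Hedged sketch of the prototype operations mentioned in the abstract.
# All names (cutmix, class_prototypes, fuse_prototypes, prototype_logits)
# and the fusion weight `alpha` are illustrative assumptions.
import torch
import torch.nn.functional as F


def cutmix(x_a, x_b, y_a, y_b):
    """Paste a random rectangular patch of (x_b, y_b) into (x_a, y_a)."""
    _, _, h, w = x_a.shape
    lam = torch.distributions.Beta(1.0, 1.0).sample().item()
    cut_h, cut_w = int(h * (1 - lam) ** 0.5), int(w * (1 - lam) ** 0.5)
    cy, cx = torch.randint(0, h, (1,)).item(), torch.randint(0, w, (1,)).item()
    y1, y2 = max(cy - cut_h // 2, 0), min(cy + cut_h // 2, h)
    x1, x2 = max(cx - cut_w // 2, 0), min(cx + cut_w // 2, w)
    x_mix, y_mix = x_a.clone(), y_a.clone()
    x_mix[:, :, y1:y2, x1:x2] = x_b[:, :, y1:y2, x1:x2]
    y_mix[:, y1:y2, x1:x2] = y_b[:, y1:y2, x1:x2]
    return x_mix, y_mix


def class_prototypes(feat, label, num_classes):
    """Masked average pooling: one prototype (mean feature) per class.

    feat: (B, C, H, W) feature map; label: (B, H, W) hard (pseudo-)labels.
    Returns (num_classes, C); classes absent from the batch get zeros.
    """
    one_hot = F.one_hot(label, num_classes).permute(0, 3, 1, 2).float()  # (B, K, H, W)
    masked_sum = torch.einsum("bkhw,bchw->kc", one_hot, feat)
    counts = one_hot.sum(dim=(0, 2, 3)).clamp(min=1.0).unsqueeze(1)      # (K, 1)
    return masked_sum / counts


def fuse_prototypes(proto_teacher, proto_mixed, alpha=0.5):
    """Blend teacher-branch prototypes with auxiliary (CutMix) prototypes
    into a single global prototype per class."""
    return alpha * proto_teacher + (1 - alpha) * proto_mixed


def prototype_logits(feat, prototypes, temperature=0.1):
    """Cosine similarity between each pixel feature and each global prototype;
    these logits can drive a prototype-based consistency loss."""
    feat = F.normalize(feat, dim=1)          # (B, C, H, W)
    protos = F.normalize(prototypes, dim=1)  # (K, C)
    return torch.einsum("bchw,kc->bkhw", feat, protos) / temperature
```

In a training loop of this kind, the fused prototypes would be recomputed each iteration from the Mean Teacher's features on labeled/unlabeled data and the auxiliary network's features on the CutMix-mixed data, and the resulting prototype-based predictions would be encouraged to agree with the teacher's outputs; the exact losses and update schedule follow the paper rather than this sketch.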

Related articles:
arXiv:2109.09960 [cs.CV] (Published 2021-09-21)
Enforcing Mutual Consistency of Hard Regions for Semi-supervised Medical Image Segmentation
arXiv:2409.07793 [cs.CV] (Published 2024-09-12)
Lagrange Duality and Compound Multi-Attention Transformer for Semi-Supervised Medical Image Segmentation
Fuchen Zheng et al.
arXiv:2306.14293 [cs.CV] (Published 2023-06-25)
Multi-Scale Cross Contrastive Learning for Semi-Supervised Medical Image Segmentation