arXiv Analytics

arXiv:2011.09608 [cs.CV]

Bidirectional RNN-based Few Shot Learning for 3D Medical Image Segmentation

Soopil Kim, Sion An, Philip Chikontwe, Sang Hyun Park

Published 2020-11-19 (Version 1)

Segmentation of organs of interest in 3D medical images is necessary for accurate diagnosis and longitudinal studies. Though recent advances using deep learning have shown success for many segmentation tasks, large datasets are required for high performance, and the annotation process is both time-consuming and labor-intensive. In this paper, we propose a 3D few-shot segmentation framework for accurate organ segmentation using limited training samples of the target organ annotation. To achieve this, a U-Net-like network is designed to predict segmentation by learning the relationship between 2D slices of support data and a query image, including a bidirectional gated recurrent unit (GRU) that learns consistency of encoded features between adjacent slices. We also introduce a transfer learning method that adapts to the characteristics of the target image and organ by updating the model before testing, using arbitrary support and query data sampled from the support set. We evaluate our proposed model using three 3D CT datasets with annotations of different organs. Our model yielded significantly improved performance over state-of-the-art few-shot segmentation models and was comparable to a fully supervised model trained with more target training data.
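To make the slice-consistency idea concrete, below is a minimal PyTorch sketch of a bidirectional GRU applied across the slice axis of per-slice encoded features. This is an illustrative assumption, not the authors' implementation: the class names (SliceEncoder, BiGRUSliceConsistency), feature dimensions, and the spatial pooling are hypothetical simplifications, and the actual model presumably keeps spatial feature maps and also conditions on support features before decoding.

```python
# Minimal sketch (assumed, not the paper's code): a 2D encoder produces one feature
# vector per slice of a 3D volume; a bidirectional GRU propagates context between
# adjacent slices so the encoded features stay consistent along the slice axis.
import torch
import torch.nn as nn


class SliceEncoder(nn.Module):
    """Toy 2D encoder standing in for the U-Net-like encoder (hypothetical)."""

    def __init__(self, in_ch: int = 1, feat_dim: int = 64):
        super().__init__()
        self.conv = nn.Sequential(
            nn.Conv2d(in_ch, feat_dim, kernel_size=3, padding=1),
            nn.ReLU(),
            nn.AdaptiveAvgPool2d(1),  # collapse spatial dims only to keep the sketch short
        )

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        # x: (num_slices, in_ch, H, W) -> (num_slices, feat_dim)
        return self.conv(x).flatten(1)


class BiGRUSliceConsistency(nn.Module):
    """Bidirectional GRU over the slice axis of encoded features."""

    def __init__(self, feat_dim: int = 64, hidden: int = 64):
        super().__init__()
        self.gru = nn.GRU(feat_dim, hidden, batch_first=True, bidirectional=True)
        self.proj = nn.Linear(2 * hidden, feat_dim)  # fuse forward/backward directions

    def forward(self, slice_feats: torch.Tensor) -> torch.Tensor:
        # slice_feats: (num_slices, feat_dim)
        seq = slice_feats.unsqueeze(0)        # (1, num_slices, feat_dim)
        out, _ = self.gru(seq)                # (1, num_slices, 2 * hidden)
        return self.proj(out).squeeze(0)      # (num_slices, feat_dim)


# Usage with dummy data: encode the 2D slices of a query volume, then propagate
# inter-slice context in both directions before decoding the segmentation.
encoder = SliceEncoder()
bigru = BiGRUSliceConsistency()
query_volume = torch.randn(32, 1, 128, 128)   # 32 axial slices, single channel
feats = encoder(query_volume)                 # (32, 64)
context_feats = bigru(feats)                  # (32, 64), slice-consistent features
```

The pre-test adaptation described in the abstract would, under the same assumptions, amount to a few gradient steps on episodes formed by sampling support and query slices from the annotated support volume before running inference on the actual query.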

Related articles:
arXiv:2302.05615 [cs.CV] (Published 2023-02-11)
Anatomical Invariance Modeling and Semantic Alignment for Self-supervised Learning in 3D Medical Image Segmentation
arXiv:2406.10519 [cs.CV] (Published 2024-06-15)
Self Pre-training with Topology- and Spatiality-aware Masked Autoencoders for 3D Medical Image Segmentation
arXiv:1906.07367 [cs.CV] (Published 2019-06-18)
A sparse annotation strategy based on attention-guided active learning for 3D medical image segmentation