Paper Title

Bidirectional RNN-based Few Shot Learning for 3D Medical Image Segmentation

Authors

Kim, Soopil, An, Sion, Chikontwe, Philip, Park, Sang Hyun

Abstract

Segmentation of organs of interest in 3D medical images is necessary for accurate diagnosis and longitudinal studies. Though recent advances in deep learning have shown success on many segmentation tasks, large datasets are required for high performance, and the annotation process is both time-consuming and labor-intensive. In this paper, we propose a 3D few-shot segmentation framework for accurate organ segmentation using limited training samples with target-organ annotations. To achieve this, a U-Net-like network is designed to predict segmentation by learning the relationship between 2D slices of the support data and a query image; it includes a bidirectional gated recurrent unit (GRU) that learns the consistency of encoded features across adjacent slices. We also introduce a transfer learning method that adapts to the characteristics of the target image and organ by updating the model before testing, using arbitrary support and query pairs sampled from the support data. We evaluate our model on three 3D CT datasets with annotations of different organs. Our model yielded significantly improved performance over state-of-the-art few-shot segmentation models and was comparable to a fully supervised model trained with more target training data.
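
To make the slice-consistency mechanism concrete, here is a minimal PyTorch sketch (not the authors' implementation) of a bidirectional GRU refining per-slice encoder features along the slice axis; the module name `SliceConsistencyGRU`, the feature sizes, and the pooled-feature shape are all illustrative assumptions.

```python
import torch
import torch.nn as nn

class SliceConsistencyGRU(nn.Module):
    """Hypothetical sketch: smooth per-slice features with a
    bidirectional GRU so adjacent slices share context."""

    def __init__(self, feat_dim: int = 256, hidden_dim: int = 128):
        super().__init__()
        # Bidirectional GRU over the slice dimension; its output
        # concatenates forward and backward states (2 * hidden_dim).
        self.gru = nn.GRU(feat_dim, hidden_dim,
                          batch_first=True, bidirectional=True)
        # Project back to the encoder feature size so the refined
        # features can feed a U-Net-style decoder unchanged.
        self.proj = nn.Linear(2 * hidden_dim, feat_dim)

    def forward(self, slice_feats: torch.Tensor) -> torch.Tensor:
        # slice_feats: (batch, num_slices, feat_dim), e.g. globally
        # pooled encoder maps of each 2D slice of a query volume.
        out, _ = self.gru(slice_feats)
        return self.proj(out)

# Usage: refine 32 slice embeddings of one CT volume.
feats = torch.randn(1, 32, 256)
refined = SliceConsistencyGRU()(feats)
print(refined.shape)  # torch.Size([1, 32, 256])
```

Because the GRU runs in both directions, each slice's refined feature carries context from the slices before and after it, which is the property the abstract describes for keeping predictions consistent across adjacent slices.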
