Paper Title

Localizing Anatomical Landmarks in Ocular Images using Zoom-In Attentive Networks

Paper Authors

Xiaofeng Lei, Shaohua Li, Xinxing Xu, Huazhu Fu, Yong Liu, Yih-Chung Tham, Yangqin Feng, Mingrui Tan, Yanyu Xu, Jocelyn Hui Lin Goh, Rick Siow Mong Goh, Ching-Yu Cheng

Paper Abstract

Localizing anatomical landmarks is an important task in medical image analysis. However, the landmarks to be localized often lack prominent visual features. Their locations are elusive and easily confused with the background, so precise localization depends heavily on the context formed by their surrounding areas. In addition, the required precision is usually higher than in segmentation and object detection tasks. Localization therefore poses unique challenges distinct from segmentation or detection. In this paper, we propose a zoom-in attentive network (ZIAN) for anatomical landmark localization in ocular images. First, a coarse-to-fine, or "zoom-in", strategy is used to learn contextualized features at different scales. Then, an attentive fusion module aggregates the multi-scale features; it consists of 1) a co-attention network with a multiple regions-of-interest (ROIs) scheme that learns complementary features from the multiple ROIs, and 2) an attention-based fusion module that integrates the multi-ROI features with the non-ROI features. We evaluated ZIAN on two open challenge tasks, i.e., fovea localization in fundus images and scleral spur localization in AS-OCT images. Experiments show that ZIAN achieves promising performance and outperforms state-of-the-art localization methods. The source code and trained models of ZIAN are available at https://github.com/leixiaofeng-astar/OMIA9-ZIAN.
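To make the attentive fusion step concrete, below is a minimal PyTorch-style sketch written from the abstract alone. It is our own hypothetical reading, not the authors' implementation (see the GitHub repository above for the real code): the class name AttentiveFusion, the pooled feature shapes, and the use of nn.MultiheadAttention as the co-attention over ROI features are all assumptions.

    import torch
    import torch.nn as nn

    class AttentiveFusion(nn.Module):
        # Hypothetical sketch: co-attention across pooled multi-ROI features,
        # then attention-weighted fusion with the non-ROI (global) feature.
        def __init__(self, channels=256, num_heads=4):
            super().__init__()
            self.co_attn = nn.MultiheadAttention(channels, num_heads, batch_first=True)
            self.score = nn.Linear(channels, 1)            # per-ROI attention score
            self.fuse = nn.Linear(2 * channels, channels)  # combine ROI and non-ROI features

        def forward(self, roi_feats, non_roi_feat):
            # roi_feats:    (B, num_rois, C) pooled features from the zoom-in ROIs
            # non_roi_feat: (B, C)           pooled feature of the full image
            attended, _ = self.co_attn(roi_feats, roi_feats, roi_feats)  # complementary ROI context
            weights = torch.softmax(self.score(attended), dim=1)         # (B, num_rois, 1)
            roi_summary = (weights * attended).sum(dim=1)                # (B, C)
            return self.fuse(torch.cat([roi_summary, non_roi_feat], dim=-1))

    # Usage with dummy tensors:
    fusion = AttentiveFusion(channels=256)
    roi_feats = torch.randn(2, 3, 256)       # batch of 2, 3 ROIs at different zoom scales
    non_roi_feat = torch.randn(2, 256)       # global (non-ROI) feature
    fused = fusion(roi_feats, non_roi_feat)  # (2, 256)

In the full model, roi_feats would presumably come from ROIs cropped at several scales around the coarse landmark estimate (the "zoom-in" step), and the fused feature would feed the fine-stage localization head.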
