Paper Title

Focused Decoding Enables 3D Anatomical Detection by Transformers

Paper Authors

Bastian Wittmann, Fernando Navarro, Suprosanna Shit, Bjoern Menze

Paper Abstract

Detection Transformers represent end-to-end object detection approaches based on a Transformer encoder-decoder architecture, exploiting the attention mechanism for global relation modeling. Although Detection Transformers deliver results on par with or even superior to their highly optimized CNN-based counterparts operating on 2D natural images, their success is closely coupled to access to a vast amount of training data. This, however, restricts the feasibility of employing Detection Transformers in the medical domain, as access to annotated data is typically limited. To tackle this issue and facilitate the advent of medical Detection Transformers, we propose a novel Detection Transformer for 3D anatomical structure detection, dubbed Focused Decoder. Focused Decoder leverages information from an anatomical region atlas to simultaneously deploy query anchors and restrict the cross-attention's field of view to regions of interest, which allows for a precise focus on relevant anatomical structures. We evaluate our proposed approach on two publicly available CT datasets and demonstrate that Focused Decoder not only provides strong detection results and thus alleviates the need for a vast amount of annotated data but also exhibits exceptional and highly intuitive explainability of results via attention weights. Our code is available at https://github.com/bwittmann/transoar.
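To make the central idea concrete, the sketch below shows RoI-restricted cross-attention: queries anchored at atlas-derived regions of interest attend only to feature-map tokens inside their region. This is a minimal illustration, not the authors' implementation (see the linked transoar repository for that); the tensor layout and the helper `roi_attention_mask` are hypothetical names introduced here.

```python
# Illustrative sketch of cross-attention with a restricted field of view.
# Assumptions (not from the paper's code): a flattened 3D feature map,
# one query per anatomical structure, and axis-aligned RoI boxes from an atlas.
import torch
import torch.nn.functional as F


def roi_attention_mask(token_coords, roi_boxes):
    """Boolean mask of shape (num_queries, num_tokens): True where a token's
    (z, y, x) coordinate lies inside the query's RoI box (z1, y1, x1, z2, y2, x2)."""
    lo, hi = roi_boxes[:, None, :3], roi_boxes[:, None, 3:]
    return ((token_coords[None] >= lo) & (token_coords[None] <= hi)).all(-1)


def focused_cross_attention(queries, feat_tokens, token_coords, roi_boxes):
    """Single-head cross-attention restricted to each query's RoI.

    queries:      (num_queries, d) -- one query per anatomical structure
    feat_tokens:  (num_tokens, d)  -- flattened 3D feature map
    token_coords: (num_tokens, 3)  -- voxel coordinate of each token
    roi_boxes:    (num_queries, 6) -- atlas-derived RoI per query
    """
    d = queries.shape[-1]
    scores = queries @ feat_tokens.T / d ** 0.5            # (num_queries, num_tokens)
    mask = roi_attention_mask(token_coords, roi_boxes)      # restrict field of view
    scores = scores.masked_fill(~mask, float("-inf"))       # tokens outside the RoI are ignored
    attn = F.softmax(scores, dim=-1)                         # per-RoI attention weights
    return attn @ feat_tokens, attn
```

The returned attention weights are confined to each query's region of interest, which is also what makes them easy to inspect and interpret per anatomical structure.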
