Paper Title
Localization supervision of chest x-ray classifiers using label-specific eye-tracking annotation
Paper Authors
Paper Abstract
Convolutional neural networks (CNNs) have been successfully applied to chest x-ray (CXR) images. Moreover, annotated bounding boxes have been shown to improve the interpretability of a CNN in terms of localizing abnormalities. However, only a few relatively small CXR datasets containing bounding boxes are available, and collecting them is very costly. Opportunely, eye-tracking (ET) data can be collected in a non-intrusive way during the clinical workflow of a radiologist. We use ET data recorded from radiologists while dictating CXR reports to train CNNs. We extract snippets from the ET data by associating them with the dictation of keywords and use them to supervise the localization of specific abnormalities. We show that this method improves a model's interpretability without impacting its image-level classification.
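The pipeline described in the abstract can be illustrated with a minimal sketch: gaze samples recorded shortly before a radiologist dictates an abnormality keyword are kept as a snippet, then rendered as a heat map that can serve as a weak localization target for that label. This is an assumption-laden illustration, not the authors' implementation; the function names, the time window, and the Gaussian width below are all hypothetical.

```python
import numpy as np

def extract_gaze_snippet(gaze_points, keyword_onset, window=1.5):
    """Keep gaze samples recorded within `window` seconds before the
    dictation onset of an abnormality keyword (window length is illustrative)."""
    return [g for g in gaze_points
            if keyword_onset - window <= g["t"] <= keyword_onset]

def gaze_heatmap(snippet, shape, sigma=25.0):
    """Render the selected fixations as a Gaussian heat map that could be
    used to supervise the localization output of a CNN for that label."""
    h, w = shape
    ys, xs = np.mgrid[0:h, 0:w]
    heat = np.zeros(shape, dtype=np.float32)
    for g in snippet:
        heat += np.exp(-((xs - g["x"]) ** 2 + (ys - g["y"]) ** 2) / (2 * sigma ** 2))
    return heat / heat.max() if heat.max() > 0 else heat

# Example with made-up values: timestamped gaze fixations (seconds, pixel
# coordinates) and the onset time of a dictated keyword such as "atelectasis".
gaze = [{"t": 3.1, "x": 120, "y": 200},
        {"t": 3.4, "x": 130, "y": 210},
        {"t": 7.9, "x": 400, "y": 300}]
snippet = extract_gaze_snippet(gaze, keyword_onset=3.6)
target = gaze_heatmap(snippet, shape=(512, 512))  # weak localization target
```

In such a setup, the heat map would typically be combined with the image-level labels during training, e.g. as an auxiliary loss on the model's attention or saliency map alongside the standard classification loss, so that localization supervision does not interfere with image-level classification.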