Paper Title
Leveraging Uncertainty for Deep Interpretable Classification and Weakly-Supervised Segmentation of Histology Images
Paper Authors
Paper Abstract
Trained using only image class labels, deep weakly-supervised methods allow image classification along with ROI segmentation for interpretability. Despite their success on natural images, they face several challenges on histology data, where ROIs are visually similar to the background, making models vulnerable to high pixel-wise false positives. These methods lack mechanisms for explicitly modeling non-discriminative regions, which raises false-positive rates. We propose novel regularization terms that enable the model to seek both non-discriminative and discriminative regions, while discouraging unbalanced segmentations and using only image class labels. Our method is composed of two networks: a localizer that yields a segmentation mask, followed by a classifier. The training loss pushes the localizer to build a segmentation mask that holds the most discriminative regions while simultaneously modeling background regions. Comprehensive experiments on two histology datasets show the merits of our method in reducing false positives and accurately segmenting ROIs.
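The two-network design described in the abstract can be illustrated with a minimal PyTorch-style sketch. The class `WeaklySupervisedSegClassifier`, the `training_loss` function, and the weights `lam_bg` and `lam_size` are hypothetical names introduced here for illustration; the exact architectures and regularization terms are assumptions, not the authors' implementation. The sketch only shows the general idea: the foreground of the mask should be discriminative, the background non-discriminative, and the mask should avoid degenerate, unbalanced solutions, all supervised by the image class label alone.

```python
# Minimal sketch (assumed formulation, not the paper's exact losses):
# a localizer yields a soft segmentation mask, a classifier reads the
# masked image, and regularizers shape the foreground/background split.
import torch
import torch.nn as nn
import torch.nn.functional as F


class WeaklySupervisedSegClassifier(nn.Module):  # hypothetical wrapper
    def __init__(self, localizer: nn.Module, classifier: nn.Module):
        super().__init__()
        self.localizer = localizer    # outputs a 1-channel mask logit map
        self.classifier = classifier  # outputs class logits

    def forward(self, x):
        mask = torch.sigmoid(self.localizer(x))        # soft mask in [0, 1]
        logits_fg = self.classifier(x * mask)          # candidate ROI regions
        logits_bg = self.classifier(x * (1.0 - mask))  # background regions
        return mask, logits_fg, logits_bg


def training_loss(mask, logits_fg, logits_bg, y, lam_bg=1.0, lam_size=0.1):
    """Hypothetical combination of the loss ideas in the abstract."""
    # Foreground must hold the most discriminative evidence for the label y.
    loss_fg = F.cross_entropy(logits_fg, y)
    # Background should be non-discriminative: maximize its prediction entropy
    # (one possible way to model explicitly non-discriminative regions).
    probs_bg = F.softmax(logits_bg, dim=1)
    entropy_bg = -(probs_bg * probs_bg.clamp_min(1e-8).log()).sum(dim=1).mean()
    loss_bg = -entropy_bg
    # Discourage unbalanced masks (all-foreground or all-background).
    loss_size = (mask.mean() - 0.5).abs()
    return loss_fg + lam_bg * loss_bg + lam_size * loss_size
```

At test time, only the localizer's mask is needed for ROI segmentation, while the classifier's foreground prediction provides the image-level label; the relative weighting of the background and size terms would in practice be tuned on validation data.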