Paper Title

Handcrafted Histological Transformer (H2T): Unsupervised Representation of Whole Slide Images

Authors

Quoc Dang Vu, Kashif Rajpoot, Shan E Ahmed Raza, Nasir Rajpoot

Abstract

Diagnostic, prognostic and therapeutic decision-making for cancer in pathology clinics can now be carried out based on analysis of multi-gigapixel tissue images, also known as whole-slide images (WSIs). Recently, deep convolutional neural networks (CNNs) have been proposed to derive unsupervised WSI representations; these are attractive because they rely less on cumbersome expert annotation. However, a major trade-off is that higher predictive power generally comes at the cost of interpretability, posing a challenge to their clinical use, where transparency in decision-making is generally expected. To address this challenge, we present a handcrafted framework based on deep CNNs for constructing holistic WSI-level representations. Building on recent findings about the internal workings of the Transformer in the domain of natural language processing, we break down its processes and handcraft them into a more transparent framework that we term the Handcrafted Histological Transformer, or H2T. Based on our experiments involving various datasets consisting of a total of 5,306 WSIs, the results demonstrate that H2T-based holistic WSI-level representations offer competitive performance compared to recent state-of-the-art methods and can be readily utilized for various downstream analysis tasks. Finally, our results demonstrate that the H2T framework can be up to 14 times faster than the Transformer models.
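The abstract describes building a holistic WSI-level representation from deep-CNN patch features through a handcrafted, Transformer-inspired aggregation. The sketch below is only a minimal illustration of that general idea under assumptions of ours, not the authors' exact H2T pipeline: it assumes patch features have already been extracted by a CNN, discovers prototypical patterns with k-means over a cohort of slides, and summarizes each slide by pooling its patch features per prototype. All names and parameters here (`fit_prototypes`, `wsi_to_vector`, `n_prototypes`) are placeholders introduced for this sketch.

```python
# Illustrative sketch (not the published H2T implementation): derive a
# fixed-length, slide-level vector from pre-extracted patch features by
# (1) clustering patches from many WSIs into prototypical patterns and
# (2) pooling each slide's patch features per prototype.
import numpy as np
from sklearn.cluster import KMeans

def fit_prototypes(all_patch_features: np.ndarray, n_prototypes: int = 16) -> KMeans:
    """Cluster cohort-wide patch features (N x D) into prototypical patterns."""
    return KMeans(n_clusters=n_prototypes, n_init=10, random_state=0).fit(all_patch_features)

def wsi_to_vector(patch_features: np.ndarray, prototypes: KMeans) -> np.ndarray:
    """Summarize one WSI (M x D patch features) as the concatenation of the
    mean patch feature assigned to each prototype."""
    n_protos, dim = prototypes.n_clusters, patch_features.shape[1]
    assignments = prototypes.predict(patch_features)
    pooled = np.zeros((n_protos, dim))
    for k in range(n_protos):
        members = patch_features[assignments == k]
        if len(members) > 0:
            pooled[k] = members.mean(axis=0)
    return pooled.reshape(-1)  # (n_protos * dim,) holistic WSI-level representation

# Toy usage: 3 slides with random stand-in "CNN" patch features of dimension 64.
rng = np.random.default_rng(0)
slides = [rng.normal(size=(rng.integers(200, 400), 64)) for _ in range(3)]
protos = fit_prototypes(np.vstack(slides), n_prototypes=8)
wsi_vectors = np.stack([wsi_to_vector(s, protos) for s in slides])
print(wsi_vectors.shape)  # (3, 512): fixed-length vectors usable for downstream tasks
```

Because the aggregation is an explicit assignment-and-pooling step rather than learned attention, the contribution of each prototypical pattern to the slide-level vector can be inspected directly, which is the kind of transparency the abstract emphasizes.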
