Paper Title
Graph Enhanced Contrastive Learning for Radiology Findings Summarization
Paper Authors
Paper Abstract
The impression section of a radiology report summarizes the most prominent observations from the findings section and is the most important section for radiologists to communicate to physicians. Summarizing findings is time-consuming and can be prone to error for inexperienced radiologists, and thus automatic impression generation has attracted substantial attention. Within the encoder-decoder framework, most previous studies explore incorporating extra knowledge (e.g., static pre-defined clinical ontologies or extra background information). Yet, they encode such knowledge with a separate encoder and treat it as an extra input to their models, which limits their ability to leverage its relations with the original findings. To address this limitation, we propose a unified framework that exploits both the extra knowledge and the original findings in an integrated way, so that the critical information (i.e., key words and their relations) can be extracted appropriately to facilitate impression generation. In detail, each input findings text is encoded by a text encoder, and a graph is constructed from its entities and dependency tree. Then, a graph encoder (e.g., a graph neural network (GNN)) is adopted to model relation information in the constructed graph. Finally, to emphasize the key words in the findings, contrastive learning is introduced to pull positive samples (constructed by masking non-key words) closer and push apart negative ones (constructed by masking key words). Experimental results on OpenI and MIMIC-CXR confirm the effectiveness of our proposed method.
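To make the graph step described in the abstract concrete, the following is a minimal, illustrative sketch (not the authors' implementation) of how a findings sentence could be turned into a token-level graph from its dependency tree and entity mentions and then passed through a toy one-layer graph convolution. The use of spaCy's general-domain en_core_web_sm parser, the 256-dimensional hidden size, and the specific GCN formulation are all assumptions made here for illustration.

```python
# Hypothetical sketch: build a token graph from a findings sentence using its
# dependency tree and entity spans, then apply a one-layer GCN over stand-in
# text-encoder hidden states. Not the paper's actual architecture.
import spacy
import torch
import torch.nn as nn

nlp = spacy.load("en_core_web_sm")  # a clinical parser/NER would be a better fit

def build_adjacency(text: str) -> tuple[list[str], torch.Tensor]:
    """Return the tokens and a symmetric, self-looped adjacency matrix."""
    doc = nlp(text)
    n = len(doc)
    adj = torch.eye(n)                      # self-loops
    for tok in doc:                         # dependency-tree edges (undirected here)
        adj[tok.i, tok.head.i] = 1.0
        adj[tok.head.i, tok.i] = 1.0
    for ent in doc.ents:                    # fully connect tokens inside each entity span
        for i in range(ent.start, ent.end):
            for j in range(ent.start, ent.end):
                adj[i, j] = 1.0
    return [tok.text for tok in doc], adj

class GCNLayer(nn.Module):
    """Single graph-convolution layer: H' = ReLU(D^-1 A H W)."""
    def __init__(self, dim: int):
        super().__init__()
        self.linear = nn.Linear(dim, dim)

    def forward(self, h: torch.Tensor, adj: torch.Tensor) -> torch.Tensor:
        deg = adj.sum(dim=-1, keepdim=True).clamp(min=1.0)  # row-normalize A
        return torch.relu(self.linear((adj / deg) @ h))

tokens, adj = build_adjacency("Mild cardiomegaly with small bilateral pleural effusions.")
h = torch.randn(len(tokens), 256)            # stand-in for text-encoder hidden states
graph_states = GCNLayer(256)(h, adj)         # relation-aware token representations
```

In the paper's setting these relation-aware representations would come from the jointly trained text and graph encoders rather than random vectors; the sketch only shows how entity and dependency edges can be folded into one adjacency matrix.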
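The contrastive step can likewise be illustrated with a common InfoNCE-style formulation. The abstract does not give the exact loss, so the function below is a hypothetical sketch under that assumption: the anchor is the encoded original findings, the positive is the same findings with non-key words masked, and the negatives mask the key words.

```python
# Hypothetical sketch of the mask-based contrastive objective (InfoNCE-style),
# not the authors' exact loss: pull the positive (non-key words masked) toward
# the anchor and push away negatives (key words masked).
import torch
import torch.nn.functional as F

def contrastive_loss(anchor: torch.Tensor,      # (batch, dim) original findings
                     positive: torch.Tensor,    # (batch, dim) non-key words masked
                     negatives: torch.Tensor,   # (batch, k, dim) key words masked
                     temperature: float = 0.1) -> torch.Tensor:
    anchor = F.normalize(anchor, dim=-1)
    positive = F.normalize(positive, dim=-1)
    negatives = F.normalize(negatives, dim=-1)

    pos_sim = (anchor * positive).sum(-1, keepdim=True) / temperature      # (batch, 1)
    neg_sim = torch.einsum("bd,bkd->bk", anchor, negatives) / temperature  # (batch, k)

    logits = torch.cat([pos_sim, neg_sim], dim=-1)          # positive sits at index 0
    labels = torch.zeros(anchor.size(0), dtype=torch.long)  # classify anchor as its positive
    return F.cross_entropy(logits, labels)

loss = contrastive_loss(torch.randn(4, 256), torch.randn(4, 256), torch.randn(4, 8, 256))
```

In practice this term would be added to the standard generation loss so that the encoder is encouraged to keep key-word information while the decoder produces the impression.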