Paper Title

covEcho Resource constrained lung ultrasound image analysis tool for faster triaging and active learning

Paper Authors

Jinu Joseph, Mahesh Raveendranatha Panicker, Yale Tung Chen, Kesavadas Chandrasekharan, Vimal Chacko Mondy, Anoop Ayyappan, Jineesh Valakkada, Kiran Vishnu Narayan

Paper Abstract

Lung ultrasound (LUS) is possibly the only medical imaging modality that can be used for continuous and periodic monitoring of the lung. This is extremely useful for tracking lung manifestations during the onset of a lung infection, or for tracking the effect of vaccination on the lung, as in pandemics such as COVID-19. There have been many attempts at automating the classification of lung severity into various classes or at automatic segmentation of various LUS landmarks and manifestations. However, all these approaches are based on training static machine learning models, which require a large, clinically annotated dataset, are computationally heavy, and are most of the time not real time. In this work, a real-time, lightweight, active learning-based approach is presented for faster triaging of COVID-19 subjects in resource-constrained settings. The tool, based on the You Only Look Once (YOLO) network, is capable of assessing image quality based on the identification of various LUS landmarks, artefacts and manifestations, predicting the severity of lung infection, supporting active learning based on feedback from clinicians or on image quality, and summarizing the significant frames that have high severity of infection and high image quality for further analysis. The results show that the proposed tool has a mean average precision (mAP) of 66% at an Intersection over Union (IoU) threshold of 0.5 for the prediction of LUS landmarks. The 14 MB lightweight YOLOv5s network achieves 123 FPS when running on a Quadro P4000 GPU. The tool is available for usage and analysis upon request from the authors.
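
The detection-and-summarization workflow described in the abstract can be sketched in a few lines of Python. This is only an illustrative sketch, not the authors' released tool: the checkpoint name covecho_yolov5s.pt, the landmark class labels, and the quality/severity heuristics are hypothetical assumptions; the only real interface used is the public YOLOv5 PyTorch Hub API.

```python
# Minimal sketch of a YOLOv5s-based LUS frame triaging pipeline.
# Hypothetical pieces (not from the paper): the checkpoint name, the class
# labels ("pleural_line", "rib", "bline", "consolidation") and the scoring
# heuristics. Only the public YOLOv5 PyTorch Hub interface is assumed.
import torch

# Load a custom-trained YOLOv5s checkpoint through the yolov5 hub API.
model = torch.hub.load("ultralytics/yolov5", "custom", path="covecho_yolov5s.pt")
model.conf = 0.25  # detection confidence threshold


def score_frame(frame):
    """Detect LUS landmarks in one frame; return (detections, quality, severity)."""
    results = model(frame)            # frame: image path, PIL image or numpy array
    det = results.pandas().xyxy[0]    # detections as a DataFrame with a 'name' column
    # Crude quality proxy: the pleural line (and ribs) should be visible.
    quality = float(det["name"].isin(["pleural_line", "rib"]).any())
    # Crude severity proxy: count B-lines, weight consolidations higher.
    severity = int((det["name"] == "bline").sum()
                   + 2 * (det["name"] == "consolidation").sum())
    return det, quality, severity


def summarize(frames, k=5):
    """Return indices of the k frames with the highest severity, then quality."""
    scored = [(i,) + score_frame(f)[1:] for i, f in enumerate(frames)]
    scored.sort(key=lambda t: (t[2], t[1]), reverse=True)
    return [i for i, _, _ in scored[:k]]
```

In a tool of this kind, the same per-frame scores could also drive the active learning loop mentioned in the abstract: frames with low image quality or uncertain detections would be queued for clinician annotation and later fine-tuning.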
