Paper Title

Train, Learn, Expand, Repeat

Authors

Abhijeet Parida, Aadhithya Sankar, Rami Eisawy, Tom Finck, Benedikt Wiestler, Franz Pfister, Julia Moosbauer

Abstract

High-quality labeled data is essential for successfully training supervised machine learning models. Although large amounts of unlabeled data exist in the medical domain, labeling poses a major challenge: medical professionals who can expertly label the data are a scarce and expensive resource. Making matters worse, voxel-wise delineation of data (e.g., for segmentation tasks) is tedious and suffers from high inter-rater variance, dramatically limiting the available training data. We propose a recursive training strategy to perform semantic segmentation given only very few training samples with pixel-level annotations. We expand this small training set with samples that carry only cheaper image-level annotations, using a recursive training strategy. We apply this technique to the segmentation of intracranial hemorrhage (ICH) in CT (computed tomography) scans of the brain, where annotated data is typically scarce.
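The expansion loop described above can be sketched as self-training constrained by image-level labels: a model trained on the few pixel-annotated samples pseudo-labels the image-level-annotated samples, and a pseudo-mask is kept only if it agrees with the cheap image-level label (lesion present or absent). The toy "model" below (a single learned intensity threshold over 1-D "images"), the acceptance rule, and all names are illustrative assumptions, not the paper's actual architecture, dataset, or criteria.

```python
# Hedged toy sketch of recursive training with image-level supervision.
# "Images" are lists of pixel intensities; a "mask" marks lesion pixels.
# The model is a single intensity threshold -- an assumption for
# illustration, not the paper's segmentation network.

def fit(samples):
    """Learn an intensity threshold from (image, mask) pairs: the midpoint
    between the brightest background pixel and the darkest lesion pixel."""
    lesion = [p for img, mask in samples for p, m in zip(img, mask) if m]
    background = [p for img, mask in samples for p, m in zip(img, mask) if not m]
    return (min(lesion) + max(background)) / 2

def predict(threshold, img):
    """Segment an image: pixels above the threshold are labeled as lesion."""
    return [p > threshold for p in img]

def recursive_training(pixel_set, image_set, rounds=3):
    """pixel_set: few (image, mask) pairs with expensive pixel annotations.
    image_set: (image, has_lesion) pairs with cheap image-level labels.
    Each round, pseudo-masks consistent with the image-level label are
    added to the training set, and the model is refit on the expanded set."""
    train_set = list(pixel_set)
    for _ in range(rounds):
        threshold = fit(train_set)
        for img, has_lesion in image_set:
            mask = predict(threshold, img)
            # Accept the pseudo-mask only if its prediction of "any lesion
            # pixel present" matches the cheap image-level annotation.
            if any(mask) == has_lesion and (img, mask) not in train_set:
                train_set.append((img, mask))
    return fit(train_set)

# Tiny demo: one pixel-annotated scan bootstraps the model, two
# image-level-annotated scans (one positive, one negative) expand it.
pixel_set = [([0.1, 0.2, 0.9], [False, False, True])]
image_set = [([0.1, 0.8, 0.95], True), ([0.1, 0.2, 0.3], False)]
model = recursive_training(pixel_set, image_set)
```

The consistency check is what distinguishes this from plain self-training: a pseudo-mask that hallucinates a lesion in a scan labeled hemorrhage-free (or misses it in a positive scan) is rejected rather than fed back into training.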
