Paper Title
Automatic Recognition of the Supraspinatus Tendinopathy from Ultrasound Images using Convolutional Neural Networks
Paper Authors
Paper Abstract
Tendon injuries such as tendinopathies and full- and partial-thickness tears are prevalent, and the supraspinatus tendon (SST) is the most vulnerable tendon in the rotator cuff. Early diagnosis of SST tendinopathy is of high importance yet hard to achieve using ultrasound imaging. In this paper, an automatic tendinopathy recognition framework based on convolutional neural networks is proposed to assist the diagnosis. The framework has two essential parts: tendon segmentation and classification. Tendon segmentation is performed by a novel network, NASUNet, which follows the encoder-decoder architecture paradigm and utilizes a multi-scale Enlarging cell. Moreover, a general classification pipeline is proposed for tendinopathy recognition that supports different base models as the feature-extractor engine. Two feature maps encoding positional information of the tendon region are introduced as network inputs to make the classification network spatial-aware. To evaluate the tendinopathy recognition system, a data set of 100 SST ultrasound images has been acquired, in which the tendinopathy cases are double-verified by magnetic resonance imaging. In both the segmentation and classification tasks, the lack of training data is compensated for by incorporating knowledge transfer, transfer learning, and data augmentation techniques. In cross-validation experiments, the proposed tendinopathy recognition model achieves 91% accuracy, 86.67% sensitivity, and 92.86% specificity, showing state-of-the-art performance compared with other models.
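The abstract does not specify how the two positional feature maps are formed or which base model serves as the feature-extractor engine. The minimal Python sketch below illustrates one plausible reading, assuming (purely as hypothetical choices) that the positional maps are the binary tendon mask and a coordinate map restricted to the tendon region, and that a ResNet-18 backbone with transfer learning acts as the base model; it is not the authors' implementation.

```python
import torch
import torch.nn as nn
from torchvision import models


def build_spatial_aware_input(image, tendon_mask):
    """Stack the grayscale ultrasound image with two positional maps.

    `image` and `tendon_mask` are (H, W) float tensors in [0, 1].
    The exact form of the positional maps is an assumption: here we use
    (a) the binary segmentation mask and (b) a vertical coordinate map
    masked to the tendon region, purely for illustration.
    """
    h, w = image.shape
    # Normalized vertical coordinate map, zeroed outside the tendon (assumption).
    ys = torch.linspace(0.0, 1.0, h).unsqueeze(1).expand(h, w)
    coord_map = ys * tendon_mask
    # 3-channel spatial-aware input: image + two positional feature maps.
    return torch.stack([image, tendon_mask, coord_map], dim=0)


class TendinopathyClassifier(nn.Module):
    """Classification pipeline with a swappable backbone (ResNet-18 here)."""

    def __init__(self, num_classes=2):
        super().__init__()
        # Transfer learning: start from ImageNet weights, then fine-tune.
        self.backbone = models.resnet18(weights=models.ResNet18_Weights.DEFAULT)
        self.backbone.fc = nn.Linear(self.backbone.fc.in_features, num_classes)

    def forward(self, x):  # x: (B, 3, H, W) spatial-aware input
        return self.backbone(x)


# Usage with dummy data: one 224x224 image and its tendon mask.
img = torch.rand(224, 224)
mask = (torch.rand(224, 224) > 0.5).float()
x = build_spatial_aware_input(img, mask).unsqueeze(0)  # (1, 3, 224, 224)
logits = TendinopathyClassifier()(x)                   # (1, 2) class scores
```

Concatenating the mask-derived maps with the image gives the classifier explicit access to where the tendon lies in the frame, which is one straightforward way to realize the "spatial-aware" input described in the abstract.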