Paper Title

BSM loss: A superior way in modeling aleatory uncertainty of fine-grained classification

Authors

Ge, Shuang; Yuan, Kehong; Han, Maokun; Sun, Desheng; Zhang, Huabin; Ye, Qiongyu

Abstract

Artificial intelligence (AI)-assisted methods have received much attention in high-risk fields such as disease diagnosis. Unlike the classification of disease types, classifying medical images as benign or malignant is a fine-grained task. However, most research focuses only on improving diagnostic accuracy and ignores the evaluation of model reliability, which limits clinical application. In clinical practice, calibration poses major challenges in the low-data regime, especially for over-parametrized models and in the presence of inherent noise. In particular, we find that modeling data-dependent uncertainty is more conducive to confidence calibration. Compared with test-time augmentation (TTA), our proposed modified Bootstrapping loss (BS loss) function with the Mixup data augmentation strategy better calibrates predictive uncertainty and captures data distribution shift without additional inference time. Our experiments indicate that the BS loss with Mixup (BSM) model can halve the Expected Calibration Error (ECE) compared to standard data augmentation, deep ensembles, and MC dropout. Under the BSM model, the correlation between uncertainty and similarity of in-domain data reaches -0.4428. Additionally, the BSM model can perceive the semantic distance of out-of-domain data, demonstrating high potential for real-world clinical practice.
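To make the two ingredients of the abstract concrete, here is a minimal NumPy sketch of (a) standard Mixup augmentation and (b) a soft bootstrapping loss that blends the target with the model's own prediction before taking cross-entropy. This is an illustrative reconstruction, not the paper's implementation: the function names, the `beta` blending weight, and the Beta(α, α) mixing distribution are assumptions based on the generic Mixup and bootstrapped-loss formulations, and the paper's exact BSM loss may differ.

```python
import numpy as np

def softmax(z):
    # Numerically stable softmax over the last axis.
    e = np.exp(z - z.max(axis=-1, keepdims=True))
    return e / e.sum(axis=-1, keepdims=True)

def mixup(x1, y1, x2, y2, alpha=0.4, rng=None):
    """Mixup: convex combination of two inputs and their one-hot labels.
    The mixing weight lam is drawn from Beta(alpha, alpha)."""
    rng = rng if rng is not None else np.random.default_rng(0)
    lam = rng.beta(alpha, alpha)
    return lam * x1 + (1 - lam) * x2, lam * y1 + (1 - lam) * y2

def soft_bootstrap_loss(logits, targets, beta=0.95):
    """Soft bootstrapping loss (illustrative): cross-entropy against a
    target that mixes the (possibly Mixup-softened) label with the
    model's own current prediction, which damps label noise."""
    p = softmax(logits)
    soft_target = beta * targets + (1 - beta) * p
    return -np.sum(soft_target * np.log(p + 1e-12), axis=-1).mean()
```

With `beta = 1.0` the loss reduces to plain cross-entropy; lowering `beta` lets the model partially trust its own predictions, which is the mechanism the bootstrapped loss uses to absorb noisy labels.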
