Title
Advanced Deep Learning Methodologies for Skin Cancer Classification in Prodromal Stages
Authors
Abstract
Technology-assisted platforms provide reliable solutions in almost every field these days. One such important application in the medical field is skin cancer classification in its preliminary stages, which demands sensitive and precise data analysis. The proposed study uses the Kaggle skin cancer dataset and consists of two main phases. In the first phase, the images are preprocessed to remove clutter, producing a refined version of the training images. To achieve this, a sharpening filter is applied, followed by a hair removal algorithm. Several image quality metrics, including Peak Signal-to-Noise Ratio (PSNR), Mean Squared Error (MSE), Maximum Absolute Squared Deviation (MXERR) and Energy Ratio / Ratio of Squared Norms (L2RAT), are used to compare overall image quality before and after the preprocessing operations. The results of these metrics show that image quality is not compromised but rather improved by the preprocessing operations. The second phase of the proposed research incorporates deep learning methodologies, which play an imperative role in the accurate, precise and robust classification of lesion moles. This is demonstrated using two state-of-the-art deep learning models: Inception-v3 and MobileNet. The experimental results show a notable improvement in training and validation accuracy for both networks when the refined version of the images is used; however, the Inception-v3 network achieved better validation accuracy and was therefore selected for evaluation on the test data. The final test accuracy using the Inception-v3 network was 86%.
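To make the preprocessing-evaluation idea concrete, the sketch below applies a generic 3×3 sharpening kernel to a tiny grayscale image and then scores the result with MSE and PSNR, two of the metrics named in the abstract. This is a minimal illustration only: the kernel, the image values, and the function names are assumptions for demonstration, not the paper's actual filter, dataset, or implementation (the paper's hair removal step and the MXERR/L2RAT metrics are omitted here).

```python
import math

# Illustrative 3x3 sharpening kernel (an assumption; the paper does not
# specify the exact filter it uses).
KERNEL = [[0, -1, 0],
          [-1, 5, -1],
          [0, -1, 0]]

def convolve(img, kernel):
    """Convolve a 2-D grayscale image (list of lists of 0..255 ints) with a
    3x3 kernel, clamping results to 0..255; border pixels are left unchanged."""
    h, w = len(img), len(img[0])
    out = [row[:] for row in img]
    for y in range(1, h - 1):
        for x in range(1, w - 1):
            acc = 0
            for ky in range(3):
                for kx in range(3):
                    acc += kernel[ky][kx] * img[y + ky - 1][x + kx - 1]
            out[y][x] = max(0, min(255, acc))
    return out

def mse(a, b):
    """Mean squared error between two equally sized grayscale images."""
    flat_a = [p for row in a for p in row]
    flat_b = [p for row in b for p in row]
    return sum((p - q) ** 2 for p, q in zip(flat_a, flat_b)) / len(flat_a)

def psnr(a, b, max_val=255.0):
    """Peak signal-to-noise ratio in dB; infinite for identical images."""
    m = mse(a, b)
    return float("inf") if m == 0 else 10 * math.log10(max_val ** 2 / m)

# Toy 4x4 image: a brighter 2x2 patch on a flat background.
img = [[100, 100, 100, 100],
       [100, 120, 120, 100],
       [100, 120, 120, 100],
       [100, 100, 100, 100]]

sharp = convolve(img, KERNEL)
print(mse(img, sharp))   # 400.0 -- the four inner pixels move from 120 to 160
print(psnr(img, sharp))  # ~22.11 dB relative to the original
```

In the study's setup, such scores would be computed between the original and the preprocessed lesion images to verify, as the abstract claims, that the cleanup steps improve rather than degrade image quality.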