Paper Title

SoK: Modeling Explainability in Security Analytics for Interpretability, Trustworthiness, and Usability

Paper Authors

Dipkamal Bhusal, Rosalyn Shin, Ajay Ashok Shewale, Monish Kumar Manikya Veerabhadran, Michael Clifford, Sara Rampazzi, Nidhi Rastogi

Paper Abstract

Interpretability, trustworthiness, and usability are key considerations in high-stakes security applications, especially when utilizing deep learning models. While these models are known for their high accuracy, they behave as black boxes, making it difficult to identify the important features and factors that led to a classification or a prediction. This can lead to uncertainty and distrust, especially when an incorrect prediction results in severe consequences. Explanation methods therefore aim to provide insights into the inner workings of deep learning models. However, most explanation methods provide inconsistent explanations, have low fidelity, and are susceptible to adversarial manipulation, all of which can reduce model trustworthiness. This paper provides a comprehensive analysis of explanation methods and demonstrates their efficacy in three distinct security applications: anomaly detection using system logs, malware prediction, and detection of adversarial images. Our quantitative and qualitative analysis reveals serious limitations and concerns in state-of-the-art explanation methods across all three applications. We show that explanation methods for security applications require distinct characteristics, such as stability, fidelity, robustness, and usability, which we outline as prerequisites for trustworthy explanation methods.
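To make two of the properties named in the abstract concrete, the sketch below (a minimal illustration in plain NumPy, not code from the paper) computes a gradient-based saliency explanation for a toy logistic model and then checks stability and fidelity empirically: stability as the average cosine similarity between explanations of slightly perturbed inputs, and fidelity as the prediction drop after deleting the top-attributed features. All function names and parameters here are illustrative assumptions.

```python
import numpy as np

# Toy differentiable "model": p(y=1 | x) = sigmoid(w.x + b).
# Illustrative sketch only; not the paper's models or code.
rng = np.random.default_rng(0)
w = rng.normal(size=10)
b = 0.1

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

def predict(x):
    return sigmoid(x @ w + b)

def saliency(x):
    # Gradient of the output w.r.t. the input features:
    # d/dx sigmoid(w.x + b) = p * (1 - p) * w
    p = predict(x)
    return p * (1.0 - p) * w

def stability(x, n_trials=100, eps=0.01):
    # Stability: average cosine similarity between the explanation of x
    # and explanations of slightly perturbed copies of x; values near 1.0
    # indicate the explanation is stable under small input noise.
    base = saliency(x)
    sims = []
    for _ in range(n_trials):
        e = saliency(x + rng.normal(scale=eps, size=x.shape))
        sims.append(base @ e / (np.linalg.norm(base) * np.linalg.norm(e)))
    return float(np.mean(sims))

def fidelity(x, k=3):
    # Deletion-style fidelity: zero out the k features with the largest
    # attribution magnitude and measure the prediction change; a faithful
    # explanation flags features whose removal visibly shifts the output.
    top = np.argsort(-np.abs(saliency(x)))[:k]
    x_del = x.copy()
    x_del[top] = 0.0
    return float(predict(x) - predict(x_del))

x = rng.normal(size=10)
print(f"prediction: {predict(x):.3f}")
print(f"stability : {stability(x):.3f}")
print(f"fidelity  : {fidelity(x):+.3f}")
```

In the high-stakes settings the abstract describes (log-based anomaly detection, malware prediction, adversarial-image detection), checks of this kind would be run against the deployed detector with a real attribution method such as LIME or Integrated Gradients, rather than this toy gradient.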
