Paper Title
Creating an Explainable Intrusion Detection System Using Self Organizing Maps
Paper Authors
Abstract
Modern Artificial Intelligence (AI) enabled Intrusion Detection Systems (IDS) are complex black boxes. This means that a security analyst will have little to no explanation or clarification on why an IDS model made a particular prediction. A potential solution to this problem is to research and develop Explainable Intrusion Detection Systems (X-IDS) based on current capabilities in Explainable Artificial Intelligence (XAI). In this paper, we create a Self Organizing Map (SOM) based X-IDS that is capable of producing explanatory visualizations. We leverage the SOM's inherent explainability to create both global and local explanations. An analyst can use global explanations to get a general idea of how a particular IDS model computes predictions. Local explanations are generated for individual data points to explain why a certain prediction value was computed. Furthermore, our SOM-based X-IDS was evaluated on both explanation generation and traditional accuracy tests using the NSL-KDD and CIC-IDS-2017 datasets.
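To make the abstract's core technique concrete, the following is a minimal, hypothetical sketch of the classic SOM training loop in NumPy: each input is matched to its Best Matching Unit (BMU) on a 2-D grid, and a Gaussian neighborhood around that unit is pulled toward the input. This is an illustrative toy, not the paper's implementation; the grid size, decay schedules, and function names here are assumptions. The BMU lookup is also the basis of a local explanation: a single data point's prediction can be traced to the grid node it maps onto.

```python
import numpy as np

def train_som(data, grid=(5, 5), epochs=100, lr0=0.5, sigma0=2.0, seed=0):
    """Train a small Self Organizing Map; returns the (h, w, dim) weight grid."""
    rng = np.random.default_rng(seed)
    h, w = grid
    dim = data.shape[1]
    weights = rng.random((h, w, dim))
    # Grid coordinates of every node, used by the neighborhood function.
    coords = np.stack(np.meshgrid(np.arange(h), np.arange(w), indexing="ij"), axis=-1)
    for t in range(epochs):
        lr = lr0 * np.exp(-t / epochs)        # decaying learning rate
        sigma = sigma0 * np.exp(-t / epochs)  # shrinking neighborhood radius
        for x in data:
            # Best Matching Unit: the node whose weights are closest to x.
            dists = np.linalg.norm(weights - x, axis=-1)
            bmu = np.unravel_index(np.argmin(dists), (h, w))
            # Gaussian neighborhood centered on the BMU (distance on the grid).
            grid_dist2 = np.sum((coords - np.array(bmu)) ** 2, axis=-1)
            influence = np.exp(-grid_dist2 / (2 * sigma ** 2))
            # Pull every node's weights toward x, scaled by its influence.
            weights += lr * influence[..., None] * (x - weights)
    return weights

# Toy usage: map 3-dimensional points onto a 5x5 grid.
X = np.random.default_rng(1).random((50, 3))
W = train_som(X)
print(W.shape)  # (5, 5, 3)
```

A global explanation in this setting would visualize the trained `weights` grid (e.g. a U-matrix of inter-node distances), while a local explanation reports, for one data point, which node won and how far the point lies from it.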