Paper Title
MAD: Self-Supervised Masked Anomaly Detection Task for Multivariate Time Series
Paper Authors
Paper Abstract
In this paper, we introduce Masked Anomaly Detection (MAD), a general self-supervised learning task for multivariate time series anomaly detection. With the increasing availability of sensor data from industrial systems, being able to detect anomalies in streams of multivariate time series data is of significant importance. Given the scarcity of anomalies in real-world applications, the majority of the literature has focused on modeling normality. Because the model learns to capture key underlying data regularities, the learned normal representations can empower anomaly detection. A typical formulation is to learn a predictive model, i.e., to use a window of time series data to predict future data values. In this paper, we propose an alternative self-supervised learning task. By randomly masking a portion of the inputs and training a model to estimate them using the remaining ones, MAD improves over the traditional left-to-right next step prediction (NSP) task. Our experimental results demonstrate that MAD achieves better anomaly detection rates than traditional NSP approaches when using exactly the same neural network (NN) base models, and that it can be modified to run as fast as NSP models at test time on the same hardware, making it an ideal upgrade for many existing NSP-based NN anomaly detection models.
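To make the masked-estimation idea concrete, here is a minimal PyTorch sketch of the objective described in the abstract: hide random entries of a multivariate window, train a network to estimate them from the visible ones, and use reconstruction error on the masked entries as an anomaly score. The window size, mask ratio, and small MLP estimator are illustrative assumptions, not the paper's actual architecture or scoring procedure.

```python
# Minimal sketch of a masked-estimation training step and anomaly score.
# All hyperparameters and the estimator network are placeholders.
import torch
import torch.nn as nn

WINDOW, N_VARS, MASK_RATIO = 32, 8, 0.25

model = nn.Sequential(                       # hypothetical estimator network
    nn.Flatten(),
    nn.Linear(WINDOW * N_VARS, 128), nn.ReLU(),
    nn.Linear(128, WINDOW * N_VARS),
)
opt = torch.optim.Adam(model.parameters(), lr=1e-3)

def mad_step(x):
    """One MAD-style training step on a batch of windows of shape (B, WINDOW, N_VARS)."""
    mask = torch.rand_like(x) < MASK_RATIO   # True where values are hidden
    x_in = x.masked_fill(mask, 0.0)          # replace hidden values with a placeholder
    pred = model(x_in).view_as(x)
    loss = ((pred - x)[mask] ** 2).mean()    # loss only on the masked positions
    opt.zero_grad()
    loss.backward()
    opt.step()
    return loss.item()

@torch.no_grad()
def anomaly_score(x):
    """Score windows by reconstruction error on randomly masked entries."""
    mask = torch.rand_like(x) < MASK_RATIO
    pred = model(x.masked_fill(mask, 0.0)).view_as(x)
    per_window_err = ((pred - x) ** 2 * mask).sum(dim=(1, 2))
    return per_window_err / mask.sum(dim=(1, 2)).clamp(min=1)

# Stand-in sensor data: normal windows for training, a few windows to score.
train_batch = torch.randn(64, WINDOW, N_VARS)
print("training loss:", mad_step(train_batch))
print("anomaly scores:", anomaly_score(torch.randn(4, WINDOW, N_VARS)))
```

In contrast, an NSP-style baseline would feed the first WINDOW-1 steps and compute the error only on the predicted next step; the sketch above differs only in which positions are hidden and scored, which is why the same NN base model can be reused for either task.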