Paper Title

STAN: Synthetic Network Traffic Generation with Generative Neural Models

Authors

Shengzhe Xu, Manish Marwah, Martin Arlitt, Naren Ramakrishnan

Abstract

Deep learning models have achieved great success in recent years, but progress in some domains like cybersecurity is stymied by a paucity of realistic datasets. Organizations are reluctant to share such data, even internally, due to privacy reasons. An alternative is to use synthetically generated data, but existing methods are limited in their ability to capture complex dependency structures, between attributes and across time. This paper presents STAN (Synthetic network Traffic generation with Autoregressive Neural models), a tool to generate realistic synthetic network traffic datasets for subsequent downstream applications. Our novel neural architecture captures both temporal dependencies and dependence between attributes at any given time. It integrates convolutional neural layers with mixture density neural layers and softmax layers, and models both continuous and discrete variables. We evaluate the performance of STAN in terms of the quality of the data it generates, training it on both a simulated dataset and a real network traffic dataset. Finally, to answer the question of whether real network traffic data can be substituted with synthetic data to train models of comparable accuracy, we train two anomaly detection models based on self-supervision. The results show only a small decline in the accuracy of models trained solely on synthetic data. While current results are encouraging in terms of the quality of the generated data and the absence of any obvious data leakage from the training data, in the future we plan to further validate this by conducting privacy attacks on the generated data. Other future work includes validating the capture of long-term dependencies and making model training
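To make the architecture described in the abstract more concrete, the following is a minimal PyTorch sketch of that general idea, not the authors' implementation: convolutional layers summarize a window of past flow records, a mixture density head parameterizes continuous attributes, and a softmax head parameterizes a discrete attribute. All class names, feature dimensions, and hyperparameters below are illustrative assumptions.

```python
# Hypothetical sketch of a STAN-style generator head (not the paper's code).
import torch
import torch.nn as nn
import torch.nn.functional as F


class TrafficGeneratorSketch(nn.Module):
    def __init__(self, n_continuous=3, n_discrete_classes=10, n_mixtures=5, hidden=64):
        super().__init__()
        in_channels = n_continuous + n_discrete_classes  # discrete attributes one-hot encoded
        self.n_continuous = n_continuous
        self.n_mixtures = n_mixtures
        # Convolutional layers capture temporal dependence across the context window.
        self.conv = nn.Sequential(
            nn.Conv1d(in_channels, hidden, kernel_size=3, padding=1),
            nn.ReLU(),
            nn.Conv1d(hidden, hidden, kernel_size=3, padding=1),
            nn.ReLU(),
            nn.AdaptiveAvgPool1d(1),
        )
        # Mixture density head: weight, mean, and log-std per component per continuous attribute.
        self.mdn = nn.Linear(hidden, n_continuous * n_mixtures * 3)
        # Softmax head for one discrete attribute (e.g. protocol).
        self.discrete = nn.Linear(hidden, n_discrete_classes)

    def forward(self, history):
        # history: (batch, window, in_channels); Conv1d expects (batch, channels, window).
        h = self.conv(history.transpose(1, 2)).squeeze(-1)            # (batch, hidden)
        params = self.mdn(h).view(-1, self.n_continuous, self.n_mixtures, 3)
        log_pi = F.log_softmax(params[..., 0], dim=-1)                # mixture weights
        mu = params[..., 1]                                           # component means
        log_sigma = params[..., 2]                                    # component log-stddevs
        discrete_logits = self.discrete(h)                            # softmax over classes
        return log_pi, mu, log_sigma, discrete_logits


# Usage with random data, purely to show tensor shapes.
model = TrafficGeneratorSketch()
history = torch.randn(8, 16, 13)   # 8 windows of 16 timesteps, 3 + 10 features each
log_pi, mu, log_sigma, logits = model(history)
print(log_pi.shape, mu.shape, logits.shape)  # (8, 3, 5) (8, 3, 5) (8, 10)
```

At generation time, one would sample a mixture component and then a Gaussian value for each continuous attribute, and sample the discrete attribute from the softmax distribution, conditioning each new record on the window of previously generated records.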
