Paper Title
Scalable Spatiotemporal Graph Neural Networks
Paper Authors
Paper Abstract
Neural forecasting of spatiotemporal time series drives both research and industrial innovation in several relevant application domains. Graph neural networks (GNNs) are often the core component of the forecasting architecture. However, in most spatiotemporal GNNs, the computational complexity scales up to a quadratic factor with the length of the sequence times the number of links in the graph, hence hindering the application of these models to large graphs and long temporal sequences. While methods to improve scalability have been proposed in the context of static graphs, few research efforts have been devoted to the spatiotemporal case. To fill this gap, we propose a scalable architecture that exploits an efficient encoding of both temporal and spatial dynamics. In particular, we use a randomized recurrent neural network to embed the history of the input time series into high-dimensional state representations encompassing multi-scale temporal dynamics. Such representations are then propagated along the spatial dimension using different powers of the graph adjacency matrix to generate node embeddings characterized by a rich pool of spatiotemporal features. The resulting node embeddings can be efficiently pre-computed in an unsupervised manner, before being fed to a feed-forward decoder that learns to map the multi-scale spatiotemporal representations to predictions. The training procedure can then be parallelized node-wise by sampling the node embeddings without breaking any dependency, thus enabling scalability to large networks. Empirical results on relevant datasets show that our approach achieves results competitive with the state of the art, while dramatically reducing the computational burden.
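To make the encode-then-decode pipeline described above concrete, below is a minimal NumPy sketch of the idea: a fixed, randomized recurrent encoder produces per-node states, powers of the adjacency matrix propagate them spatially, and only a small decoder is fit on the pre-computed embeddings. All dimensions and names (`encode`, `propagate`, `W_dec`, etc.) are illustrative assumptions, and a closed-form ridge regression stands in for the trained feed-forward decoder; this is not the paper's implementation.

```python
import numpy as np

# Hypothetical toy dimensions -- illustrative only, not from the paper.
num_nodes, seq_len, input_size, state_size, horizon = 50, 200, 1, 64, 12
K = 3  # number of adjacency powers used for spatial propagation

rng = np.random.default_rng(0)

# --- Randomized (reservoir-style) recurrent encoder ---------------------
# Input and recurrent weights are drawn once and never trained; the
# recurrent matrix is rescaled to spectral radius < 1 for stable dynamics.
W_in = rng.uniform(-1, 1, (state_size, input_size))
W_h = rng.uniform(-1, 1, (state_size, state_size))
W_h *= 0.9 / np.max(np.abs(np.linalg.eigvals(W_h)))

def encode(x):
    """x: (num_nodes, seq_len, input_size) -> final state (num_nodes, state_size)."""
    h = np.zeros((x.shape[0], state_size))
    for t in range(x.shape[1]):
        h = np.tanh(x[:, t] @ W_in.T + h @ W_h.T)
    return h

# --- Spatial propagation with powers of the adjacency matrix ------------
def propagate(h, A, K):
    """Concatenate A^0 h, A^1 h, ..., A^{K-1} h into one embedding per node."""
    out, cur = [h], h
    for _ in range(K - 1):
        cur = A @ cur
        out.append(cur)
    return np.concatenate(out, axis=-1)  # (num_nodes, K * state_size)

# Toy data: random series and a row-normalized random adjacency matrix.
x = rng.standard_normal((num_nodes, seq_len, input_size))
A = rng.random((num_nodes, num_nodes)) < 0.1
A = A / np.maximum(A.sum(1, keepdims=True), 1)

# Embeddings are computed once, without supervision...
emb = propagate(encode(x), A, K)

# ...and only a decoder is trained on top. A closed-form ridge regression
# replaces the paper's feed-forward decoder to keep the sketch short.
y = rng.standard_normal((num_nodes, horizon))  # placeholder targets
lam = 1e-2
W_dec = np.linalg.solve(emb.T @ emb + lam * np.eye(emb.shape[1]), emb.T @ y)
pred = emb @ W_dec  # (num_nodes, horizon)
```

Note how the decoding step operates row-wise on `emb`: once the embeddings are pre-computed, each node's row is an independent training sample, which is what allows the node-wise minibatch sampling and parallel training the abstract refers to.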