Paper Title
Handling Missing Data via Max-Entropy Regularized Graph Autoencoder
Paper Authors
Paper Abstract
Graph neural networks (GNNs) are popular tools for modeling relational data. Existing GNNs are not designed for attribute-incomplete graphs, which makes missing-attribute imputation a pressing issue. Recently, many works have observed that GNNs suffer from spectral concentration, meaning that the spectrum obtained by a GNN concentrates on a local part of the spectral domain, e.g., the low-frequency band due to the oversmoothing issue. As a consequence, GNNs may be seriously flawed for reconstructing graph attributes, since graph spectral concentration tends to cause low imputation precision. In this work, we present a regularized graph autoencoder for graph attribute imputation, named MEGAE, which aims at mitigating the spectral concentration problem by maximizing the graph spectral entropy. Notably, we first present a method for estimating the graph spectral entropy without eigen-decomposition of the Laplacian matrix and provide a theoretical upper error bound. A maximum-entropy regularization then acts in the latent space, directly increasing the graph spectral entropy. Extensive experiments show that MEGAE outperforms all other state-of-the-art imputation methods on a variety of benchmark datasets.
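For intuition, the sketch below illustrates the quantity the abstract refers to: one common way to define the graph spectral entropy of a signal via the graph Fourier transform. It computes the entropy the direct way, through eigen-decomposition of the Laplacian, whereas MEGAE's contribution is precisely to estimate this quantity without eigen-decomposition; that estimator and the exact regularizer are not reproduced here, and the function name, Laplacian choice, and toy graph are illustrative assumptions rather than the paper's implementation.

```python
import numpy as np

def graph_spectral_entropy(adj: np.ndarray, x: np.ndarray) -> float:
    """Shannon entropy of a graph signal's spectral energy distribution.

    Illustrative only: computes the entropy directly via eigen-decomposition,
    whereas MEGAE estimates it without eigen-decomposing the Laplacian.
    """
    deg = np.diag(adj.sum(axis=1))        # degree matrix D
    lap = deg - adj                       # combinatorial Laplacian L = D - A
    _, eigvecs = np.linalg.eigh(lap)      # columns of U form the graph Fourier basis
    x_hat = eigvecs.T @ x                 # graph Fourier transform of the signal x
    energy = x_hat ** 2
    p = energy / energy.sum()             # normalized spectral energy distribution
    p = p[p > 0]                          # drop zero entries to avoid log(0)
    return float(-(p * np.log(p)).sum())  # higher entropy = less spectral concentration


# Toy usage: a 4-node path graph with a smooth (low-frequency) signal.
A = np.array([[0, 1, 0, 0],
              [1, 0, 1, 0],
              [0, 1, 0, 1],
              [0, 0, 1, 0]], dtype=float)
x = np.array([1.0, 1.1, 0.9, 1.0])
print(graph_spectral_entropy(A, x))
```

A spectrally concentrated signal, with its energy packed into a few frequencies, yields low entropy under this definition, so using such an entropy as a maximization target encourages the representation to spread energy across the spectrum, which is the mitigation effect the abstract describes.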