Paper Title
Unleashing the Power of Transformer for Graphs
Paper Authors
Paper Abstract
Despite recent successes in natural language processing and computer vision, Transformer suffers from a scalability problem when dealing with graphs: its computational complexity is unacceptable for large-scale graphs, e.g., knowledge graphs. One solution is to consider only the near neighbors, which, however, loses Transformer's key merit of attending to elements at any distance. In this paper, we propose a new Transformer architecture, named dual-encoding Transformer (DET). DET has a structural encoder to aggregate information from connected neighbors and a semantic encoder to focus on semantically useful distant nodes. Instead of resorting to multi-hop neighbors, DET seeks the desired distant neighbors via self-supervised training. We further find that the two encoders can be combined to boost each other's performance. Our experiments demonstrate that DET achieves superior performance compared with the respective state-of-the-art methods on molecules, networks and knowledge graphs of various sizes.
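The abstract gives no implementation details, so the following is only a minimal PyTorch sketch of the dual-encoder idea it outlines. The class names (StructuralEncoder, SemanticEncoder, DualEncoderLayer), the top-k cosine-similarity selection of distant nodes, and the residual combination are all illustrative assumptions, not the authors' actual design; in particular, the self-supervised objective used by DET to find distant neighbors is omitted here.

```python
# Illustrative sketch only: every name and design choice below is a
# hypothetical stand-in for the dual-encoder idea in the abstract,
# not the DET implementation from the paper.
import torch
import torch.nn as nn
import torch.nn.functional as F

class StructuralEncoder(nn.Module):
    """Self-attention restricted to connected (1-hop) neighbors via an adjacency mask."""
    def __init__(self, dim, heads=4):
        super().__init__()
        self.attn = nn.MultiheadAttention(dim, heads, batch_first=True)

    def forward(self, x, adj):
        # adj: (N, N) boolean adjacency; True where an edge exists.
        # In the mask, True means "may NOT attend"; keep self-loops so
        # every node can attend to at least itself.
        eye = torch.eye(adj.size(0), dtype=torch.bool, device=adj.device)
        mask = ~(adj | eye)
        out, _ = self.attn(x, x, x, attn_mask=mask)
        return out

class SemanticEncoder(nn.Module):
    """Attention over the top-k most similar nodes, regardless of graph distance.
    (The paper selects distant neighbors via self-supervised training; cosine
    similarity is used here purely as a simple placeholder criterion.)"""
    def __init__(self, dim, heads=4, top_k=8):
        super().__init__()
        self.attn = nn.MultiheadAttention(dim, heads, batch_first=True)
        self.top_k = top_k

    def forward(self, x):
        # x: (1, N, dim); cosine similarity picks "semantic neighbors".
        h = F.normalize(x.squeeze(0), dim=-1)
        sim = h @ h.T                                   # (N, N)
        k = min(self.top_k, sim.size(-1))
        topk = sim.topk(k, dim=-1).indices              # (N, k)
        mask = torch.ones_like(sim, dtype=torch.bool)   # True = blocked
        mask.scatter_(1, topk, False)                   # allow top-k per node
        out, _ = self.attn(x, x, x, attn_mask=mask)
        return out

class DualEncoderLayer(nn.Module):
    """Combines the structural and semantic views with a residual connection,
    one plausible way to let the two encoders boost each other."""
    def __init__(self, dim):
        super().__init__()
        self.structural = StructuralEncoder(dim)
        self.semantic = SemanticEncoder(dim)
        self.norm = nn.LayerNorm(dim)

    def forward(self, x, adj):
        return self.norm(x + self.structural(x, adj) + self.semantic(x))

if __name__ == "__main__":
    # Usage: 6 nodes on a ring graph, 16-dim node features.
    n, dim = 6, 16
    x = torch.randn(1, n, dim)                  # (batch, nodes, dim)
    idx = torch.arange(n)
    adj = torch.zeros(n, n, dtype=torch.bool)
    adj[idx, (idx + 1) % n] = True
    adj[(idx + 1) % n, idx] = True
    print(DualEncoderLayer(dim)(x, adj).shape)  # torch.Size([1, 6, 16])
```

Note the design trade-off this sketch illustrates: the structural encoder keeps attention sparse (local edges only), while the semantic encoder recovers long-range interactions with a bounded k attended nodes per query, which is what keeps the cost below full O(N^2) attention on large graphs.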