Paper Title
Transformers over Directed Acyclic Graphs
Paper Authors
Paper Abstract
Transformer models have recently gained popularity in graph representation learning as they have the potential to learn complex relationships beyond the ones captured by regular graph neural networks. The main research question is how to inject the structural bias of graphs into the transformer architecture, and several proposals have been made for undirected molecular graphs and, recently, also for larger network graphs. In this paper, we study transformers over directed acyclic graphs (DAGs) and propose architecture adaptations tailored to DAGs: (1) an attention mechanism that is considerably more efficient than the regular quadratic complexity of transformers and at the same time faithfully captures the DAG structure, and (2) a positional encoding of the DAG's partial order, complementing the former. We rigorously evaluate our approach over various types of tasks, ranging from classifying source code graphs to classifying nodes in citation networks, and show that it is effective in two important aspects: it makes graph transformers generally outperform graph neural networks tailored to DAGs, and it improves SOTA graph transformer performance in terms of both quality and efficiency.
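The abstract only sketches the two components, so the following is a minimal, illustrative NumPy sketch rather than the paper's actual method: it assumes the attention mask follows the DAG's reachability relation (each node attends only to its ancestors and descendants) and that the positional encoding is a standard sinusoidal encoding of node depth in the partial order. All function names and these specific design choices are our assumptions for illustration.

```python
import numpy as np

def topological_order(adj):
    """Kahn's algorithm; adj[u, v] = 1 iff there is an edge u -> v."""
    indeg = adj.sum(axis=0).astype(int)
    stack = [u for u in range(len(indeg)) if indeg[u] == 0]
    order = []
    while stack:
        u = stack.pop()
        order.append(u)
        for v in np.nonzero(adj[u])[0]:
            indeg[v] -= 1
            if indeg[v] == 0:
                stack.append(int(v))
    return order

def dag_reachability_mask(adj):
    """Attention mask: node i may attend to node j iff j is an ancestor or
    descendant of i (or i == j). Dense transitive closure for clarity; a
    real implementation would exploit the DAG's sparsity, which is where
    the claimed efficiency gain over quadratic attention would come from."""
    n = adj.shape[0]
    reach = adj.astype(bool)
    for k in range(n):  # Floyd-Warshall-style boolean closure
        reach |= reach[:, [k]] & reach[[k], :]
    return reach | reach.T | np.eye(n, dtype=bool)

def depth_positional_encoding(adj, d_model):
    """Sinusoidal encoding of each node's depth (longest path from any
    source node), a simple scalar summary of the DAG's partial order.
    Assumes d_model is even."""
    n = adj.shape[0]
    depth = np.zeros(n, dtype=int)
    for u in topological_order(adj):
        for v in np.nonzero(adj[u])[0]:
            depth[v] = max(depth[v], depth[u] + 1)
    div = np.exp(-np.log(10000.0) * np.arange(0, d_model, 2) / d_model)
    pe = np.zeros((n, d_model))
    pe[:, 0::2] = np.sin(depth[:, None] * div)
    pe[:, 1::2] = np.cos(depth[:, None] * div)
    return pe

def dag_attention(Q, K, V, mask):
    """Scaled dot-product attention restricted to DAG-reachable pairs."""
    scores = Q @ K.T / np.sqrt(Q.shape[-1])
    scores = np.where(mask, scores, -np.inf)
    weights = np.exp(scores - scores.max(axis=-1, keepdims=True))
    weights /= weights.sum(axis=-1, keepdims=True)
    return weights @ V

# Toy usage on a diamond DAG: 0 -> 1 -> 3 and 0 -> 2 -> 3.
adj = np.zeros((4, 4))
adj[0, 1] = adj[1, 3] = adj[0, 2] = adj[2, 3] = 1
X = np.random.randn(4, 8) + depth_positional_encoding(adj, 8)
out = dag_attention(X, X, X, dag_reachability_mask(adj))
```

In this sketch, nodes 1 and 2 are incomparable in the partial order and therefore do not attend to each other, while both receive the same depth encoding; whether the paper handles incomparable nodes this way is not stated in the abstract.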