Paper Title
HEAT: Hyperedge Attention Networks
Paper Authors
Paper Abstract
Learning from structured data is a core machine learning task. Commonly, such data is represented as graphs, which normally only consider (typed) binary relationships between pairs of nodes. This is a substantial limitation for many domains with highly-structured data. One important such domain is source code, where hypergraph-based representations can better capture the semantically rich and structured nature of code. In this work, we present HEAT, a neural model capable of representing typed and qualified hypergraphs, where each hyperedge explicitly qualifies how participating nodes contribute. It can be viewed as a generalization of both message passing neural networks and Transformers. We evaluate HEAT on knowledge base completion and on bug detection and repair using a novel hypergraph representation of programs. In both settings, it outperforms strong baselines, indicating its power and generality.
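The abstract does not spell out the layer itself, but a minimal sketch can illustrate the idea of attention over a typed, qualified hyperedge. Everything below — the class name HyperedgeAttentionLayer, the way qualifier (role) embeddings are added to node states, and the single-query-per-hyperedge attention — is an assumption made for illustration, not the authors' HEAT implementation.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F


class HyperedgeAttentionLayer(nn.Module):
    """Toy layer (hypothetical): each typed hyperedge attends over its member
    nodes, where every member carries a learned 'qualifier' (role) embedding."""

    def __init__(self, dim: int, num_edge_types: int, num_qualifiers: int):
        super().__init__()
        self.edge_type_emb = nn.Embedding(num_edge_types, dim)
        self.qualifier_emb = nn.Embedding(num_qualifiers, dim)
        self.q_proj = nn.Linear(dim, dim)  # query derived from the hyperedge type
        self.k_proj = nn.Linear(dim, dim)  # keys from (node state + qualifier)
        self.v_proj = nn.Linear(dim, dim)  # values from (node state + qualifier)
        self.out = nn.Linear(dim, dim)

    def forward(self, node_states, hyperedges):
        # node_states: [num_nodes, dim]
        # hyperedges: list of (edge_type_id, [(node_idx, qualifier_id), ...])
        messages = torch.zeros_like(node_states)
        counts = torch.zeros(node_states.size(0), 1)
        for edge_type, members in hyperedges:
            idx = torch.tensor([n for n, _ in members])
            qual = torch.tensor([q for _, q in members])
            # Qualify each participant by the role it plays in this hyperedge.
            h = node_states[idx] + self.qualifier_emb(qual)          # [m, dim]
            query = self.q_proj(self.edge_type_emb(torch.tensor(edge_type)))
            scores = self.k_proj(h) @ query / h.size(-1) ** 0.5      # [m]
            attn = F.softmax(scores, dim=0)
            edge_msg = self.out((attn.unsqueeze(-1) * self.v_proj(h)).sum(0))
            # Send the aggregated hyperedge message back to every participant.
            messages.index_add_(0, idx, edge_msg.repeat(len(members), 1))
            counts.index_add_(0, idx, torch.ones(len(members), 1))
        return node_states + messages / counts.clamp(min=1.0)


# Example: 5 nodes, one hyperedge of type 2 whose members play roles 0/1/2.
layer = HyperedgeAttentionLayer(dim=16, num_edge_types=4, num_qualifiers=3)
x = torch.randn(5, 16)
edges = [(2, [(0, 0), (3, 1), (4, 2)])]
print(layer(x, edges).shape)  # torch.Size([5, 16])
```

In practice a layer like this would be stacked and interleaved with normalization and feed-forward blocks, much as in a Transformer; the sketch omits those details and any batching over hyperedges.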