Paper Title
Equivariant Hypergraph Neural Networks
Paper Authors
Paper Abstract
Many problems in computer vision and machine learning can be cast as learning on hypergraphs that represent higher-order relations. Recent approaches for hypergraph learning extend graph neural networks based on message passing, which is simple yet fundamentally limited in modeling long-range dependencies and expressive power. On the other hand, tensor-based equivariant neural networks enjoy maximal expressiveness, but their application has been limited in hypergraphs due to heavy computation and strict assumptions on fixed-order hyperedges. We resolve these problems and present Equivariant Hypergraph Neural Network (EHNN), the first attempt to realize maximally expressive equivariant layers for general hypergraph learning. We also present two practical realizations of our framework based on hypernetworks (EHNN-MLP) and self-attention (EHNN-Transformer), which are easy to implement and theoretically more expressive than most message passing approaches. We demonstrate their capability in a range of hypergraph learning problems, including synthetic k-edge identification, semi-supervised classification, and visual keypoint matching, and report improved performances over strong message passing baselines. Our implementation is available at https://github.com/jw9730/ehnn.
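To make the hypernetwork-based idea concrete, below is a minimal, illustrative sketch, not the paper's actual EHNN-MLP layer: a small hypernetwork produces hyperedge-order-conditioned weights that transform hyperedge features before they are averaged onto incident nodes. PyTorch is assumed, and all class and variable names are hypothetical.

# Illustrative sketch only (assumed simplification, not the released EHNN code):
# a hypernetwork maps each hyperedge's order to a weight matrix, so hyperedges
# of different orders share one layer instead of requiring fixed-order weights.
import torch
import torch.nn as nn


class OrderConditionedLayer(nn.Module):
    def __init__(self, dim: int, max_order: int):
        super().__init__()
        # Hypernetwork: embed the hyperedge order, then emit a dim x dim weight.
        self.order_embed = nn.Embedding(max_order + 1, dim)
        self.hyper = nn.Linear(dim, dim * dim)
        self.dim = dim

    def forward(self, edge_feat, edge_order, incidence):
        # edge_feat: (E, dim) hyperedge features
        # edge_order: (E,) integer order (number of incident nodes) per hyperedge
        # incidence: (N, E) binary node-hyperedge incidence matrix
        w = self.hyper(self.order_embed(edge_order))            # (E, dim * dim)
        w = w.view(-1, self.dim, self.dim)                      # (E, dim, dim)
        msg = torch.bmm(edge_feat.unsqueeze(1), w).squeeze(1)   # (E, dim)
        deg = incidence.sum(dim=1, keepdim=True).clamp(min=1)   # node degrees
        return incidence @ msg / deg                            # (N, dim) node update


# Toy usage: 4 nodes, 2 hyperedges ({0, 1, 2} of order 3 and {2, 3} of order 2).
inc = torch.tensor([[1., 0.], [1., 0.], [1., 1.], [0., 1.]])
layer = OrderConditionedLayer(dim=8, max_order=3)
out = layer(torch.randn(2, 8), torch.tensor([3, 2]), inc)
print(out.shape)  # torch.Size([4, 8])

The order embedding is what lets a single layer handle hyperedges of arbitrary and mixed orders; the authors' EHNN-Transformer variant replaces this MLP-style update with self-attention.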