Paper Title

A Class-Aware Representation Refinement Framework for Graph Classification

Authors

Jiaxing Xu, Jinjie Ni, Yiping Ke

Abstract

Graph Neural Networks (GNNs) are widely used for graph representation learning. Despite their prevalence, GNNs suffer from two drawbacks in the graph classification task: the neglect of graph-level relationships and the generalization issue. Each graph is treated separately in GNN message passing/graph pooling, and existing methods that address overfitting operate on each individual graph. This makes the learnt graph representations less effective in the downstream classification. In this paper, we propose a Class-Aware Representation rEfinement (CARE) framework for the task of graph classification. CARE computes simple yet powerful class representations and injects them to steer the learning of graph representations towards better class separability. CARE is a plug-and-play framework that is highly flexible and able to incorporate arbitrary GNN backbones without significantly increasing the computational cost. We also theoretically prove that CARE has a better generalization upper bound than its GNN backbone through Vapnik-Chervonenkis (VC) dimension analysis. Our extensive experiments with 11 well-known GNN backbones on 9 benchmark datasets validate the superiority and effectiveness of CARE over its GNN counterparts.
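To make the core idea concrete, the following is a minimal sketch of class-aware refinement: per-class representations are computed as mean prototypes of the graph embeddings, and each embedding is then interpolated toward its class prototype. This is an illustrative simplification under assumed definitions (the function names, the mean-prototype choice, and the mixing weight `alpha` are hypothetical), not CARE's actual formulation.

```python
import numpy as np

def class_representations(graph_embs, labels, num_classes):
    # One prototype per class: the mean of that class's graph embeddings.
    # (Hypothetical simplification; CARE's class representations may be
    # computed differently.)
    return np.stack(
        [graph_embs[labels == c].mean(axis=0) for c in range(num_classes)]
    )

def refine(graph_embs, labels, protos, alpha=0.5):
    # Illustrative "injection" step: pull each graph embedding toward its
    # class prototype, shrinking within-class spread and thereby improving
    # class separability of the representations.
    return (1.0 - alpha) * graph_embs + alpha * protos[labels]

# Usage on random stand-in embeddings:
rng = np.random.default_rng(0)
embs = rng.normal(size=(20, 4))       # 20 graphs, 4-dim embeddings
labels = rng.integers(0, 2, size=20)  # 2 classes
protos = class_representations(embs, labels, num_classes=2)
refined = refine(embs, labels, protos)
```

After refinement, every embedding lies strictly closer to its own class prototype, which is the separability effect the abstract describes.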
