Paper Title
Learning Graph Neural Networks for Image Style Transfer
Paper Authors
Abstract
State-of-the-art parametric and non-parametric style transfer approaches are prone to either distorted local style patterns due to global statistics alignment, or unpleasing artifacts resulting from patch mismatching. In this paper, we study a novel semi-parametric neural style transfer framework that alleviates the deficiencies of both parametric and non-parametric stylization. The core idea of our approach is to establish accurate and fine-grained content-style correspondences using graph neural networks (GNNs). To this end, we develop an elaborated GNN model with content and style local patches as the graph vertices. The style transfer procedure is then modeled as attention-based heterogeneous message passing between the style and content nodes in a learnable manner, leading to adaptive many-to-one style-content correlations at the local patch level. In addition, an elaborated deformable graph convolutional operation is introduced for cross-scale style-content matching. Experimental results demonstrate that the proposed semi-parametric image stylization approach yields encouraging results on challenging style patterns, preserving both global appearance and exquisite details. Furthermore, by controlling the number of edges at the inference stage, the proposed method also triggers novel functionalities such as diversified patch-based stylization with a single model.
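The many-to-one, attention-based message passing described above can be illustrated with a minimal, dependency-free sketch. This is not the authors' implementation: the feature dimensions, the top-k edge construction, and the cosine-similarity attention are simplifying assumptions chosen for clarity. Each content patch (a graph vertex) connects to its k most similar style patches and aggregates their features with attention weights.

```python
import numpy as np

def softmax(x, axis=-1):
    """Numerically stable softmax."""
    e = np.exp(x - x.max(axis=axis, keepdims=True))
    return e / e.sum(axis=axis, keepdims=True)

def attention_message_passing(content, style, k=3):
    """One round of heterogeneous message passing from style to content
    nodes. Each content patch attends over its k most similar style
    patches (its graph neighbors) and aggregates their features.
    A simplified sketch of many-to-one style-content correlation;
    similarity metric and k are illustrative assumptions."""
    # Cosine similarity between every content and style patch feature.
    c = content / np.linalg.norm(content, axis=1, keepdims=True)
    s = style / np.linalg.norm(style, axis=1, keepdims=True)
    sim = c @ s.T                             # (n_content, n_style)
    # Keep only the k strongest style edges per content node;
    # varying k at inference changes the stylization, as in the abstract.
    idx = np.argsort(-sim, axis=1)[:, :k]     # (n_content, k)
    scores = np.take_along_axis(sim, idx, axis=1)
    attn = softmax(scores, axis=1)            # attention over the k edges
    # Weighted aggregation of the attended style features per content node.
    return np.einsum('nk,nkd->nd', attn, style[idx])

rng = np.random.default_rng(0)
content_feats = rng.standard_normal((5, 8))   # 5 content patches, dim 8
style_feats = rng.standard_normal((7, 8))     # 7 style patches
out = attention_message_passing(content_feats, style_feats, k=3)
print(out.shape)  # (5, 8): one aggregated style message per content patch
```

In the paper's framework this aggregation is learnable and operates on deep patch features; the sketch only conveys the graph-structured, edge-count-controlled aggregation idea.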