Paper Title

Bridging Composite and Real: Towards End-to-end Deep Image Matting

Paper Authors

Jizhizi Li, Jing Zhang, Stephen J. Maybank, Dacheng Tao

Abstract

Extracting accurate foregrounds from natural images benefits many downstream applications such as film production and augmented reality. However, the furry characteristics and various appearances of the foregrounds, e.g., animals and portraits, challenge existing matting methods, which usually require extra user inputs such as trimaps or scribbles. To resolve these problems, we study the distinct roles of semantics and details for image matting and decompose the task into two parallel sub-tasks: high-level semantic segmentation and low-level details matting. Specifically, we propose a novel Glance and Focus Matting network (GFM), which employs a shared encoder and two separate decoders to learn both tasks in a collaborative manner for end-to-end natural image matting. Besides, due to the limited availability of natural images in the matting task, previous methods typically adopt composite images for training and evaluation, which results in limited generalization ability on real-world images. In this paper, we systematically investigate the domain gap between composite images and real-world images by conducting comprehensive analyses of various discrepancies between the foreground and background images. We find that a carefully designed composition route, RSSN, that aims to reduce these discrepancies can lead to a better model with remarkable generalization ability. Furthermore, we provide a benchmark containing 2,000 high-resolution real-world animal images and 10,000 portrait images, along with their manually labeled alpha mattes, to serve as a test bed for evaluating matting models' generalization ability on real-world images. Comprehensive empirical studies have demonstrated that GFM outperforms state-of-the-art methods and effectively reduces the generalization error. The code and the datasets will be released at https://github.com/JizhiziLi/GFM.
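To make the glance-and-focus idea concrete, below is a minimal sketch, assuming a PyTorch-style implementation, of a shared encoder feeding two decoders: a glance decoder predicting a coarse foreground/background/transition segmentation and a focus decoder predicting fine alpha details, merged into a final alpha matte. All module names, channel widths, and the merging rule here are illustrative assumptions rather than the authors' released model; see https://github.com/JizhiziLi/GFM for the official code.

```python
# Illustrative shared-encoder / two-decoder matting sketch in the spirit of GFM.
# Module names, channel widths, and the merge rule are assumptions, not the
# authors' implementation.
import torch
import torch.nn as nn

def conv_block(cin, cout):
    return nn.Sequential(
        nn.Conv2d(cin, cout, 3, padding=1),
        nn.BatchNorm2d(cout),
        nn.ReLU(inplace=True),
    )

class GlanceFocusSketch(nn.Module):
    def __init__(self):
        super().__init__()
        # Shared encoder: downsamples the RGB input into a common feature map.
        self.encoder = nn.Sequential(conv_block(3, 32), nn.MaxPool2d(2),
                                     conv_block(32, 64), nn.MaxPool2d(2))
        # Glance decoder: coarse 3-class map (foreground / background / transition).
        self.glance = nn.Sequential(conv_block(64, 32),
                                    nn.Upsample(scale_factor=4, mode='bilinear',
                                                align_corners=False),
                                    nn.Conv2d(32, 3, 1))
        # Focus decoder: fine alpha values, useful inside the transition region.
        self.focus = nn.Sequential(conv_block(64, 32),
                                   nn.Upsample(scale_factor=4, mode='bilinear',
                                               align_corners=False),
                                   nn.Conv2d(32, 1, 1), nn.Sigmoid())

    def forward(self, img):
        feat = self.encoder(img)
        seg = torch.softmax(self.glance(feat), dim=1)  # B x 3 x H x W
        detail = self.focus(feat)                      # B x 1 x H x W
        fg, transition = seg[:, 0:1], seg[:, 2:3]
        # Collaborative merge: trust the fine detail prediction where the
        # glance branch is uncertain (transition), and the coarse foreground
        # probability elsewhere.
        alpha = transition * detail + (1 - transition) * fg
        return alpha

if __name__ == "__main__":
    model = GlanceFocusSketch()
    alpha = model(torch.randn(1, 3, 64, 64))
    print(alpha.shape)  # torch.Size([1, 1, 64, 64])
```

The design choice mirrored here is that both decoders read the same encoder features, so semantic and detail cues are learned jointly, while the predicted transition region decides where the fine alpha estimates are used in the final matte.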
