Paper Title
Enhanced Residual Networks for Context-based Image Outpainting
Paper Authors
Paper Abstract
Although humans perform well at predicting what exists beyond the boundaries of an image, deep models struggle to understand context and extrapolate through retained information. This task, known as image outpainting, involves generating realistic expansions of an image's boundaries. Current models use generative adversarial networks but produce results that lack localized image feature consistency and appear fake. We propose two methods to address this issue: the use of a local and a global discriminator, and the addition of residual blocks within the encoding section of the network. Comparing our model against the baseline on L1 loss, mean squared error (MSE) loss, and qualitative differences reveals that our model naturally extends object boundaries and produces more internally consistent images than current methods, though at lower fidelity.
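To make the two proposed changes concrete, below is a minimal PyTorch sketch of the ideas, not the paper's released implementation: residual blocks inside the generator's encoder, plus a local/global discriminator pair and the L1/MSE losses used for comparison. All names (ResidualBlock, Encoder, PatchDiscriminator), channel counts, layer depths, and the border-crop geometry are illustrative assumptions.

```python
import torch
import torch.nn as nn

class ResidualBlock(nn.Module):
    """Residual block: the skip connection lets the block learn a residual on its input."""
    def __init__(self, channels: int):
        super().__init__()
        self.body = nn.Sequential(
            nn.Conv2d(channels, channels, kernel_size=3, padding=1),
            nn.ReLU(inplace=True),
            nn.Conv2d(channels, channels, kernel_size=3, padding=1),
        )

    def forward(self, x):
        return torch.relu(self.body(x) + x)

class Encoder(nn.Module):
    """Downsampling encoder with residual blocks added after the first stage
    (illustrative placement; depth and widths are assumptions)."""
    def __init__(self, in_ch: int = 3, base_ch: int = 64, n_res: int = 2):
        super().__init__()
        layers = [nn.Conv2d(in_ch, base_ch, 4, stride=2, padding=1), nn.ReLU(True)]
        layers += [ResidualBlock(base_ch) for _ in range(n_res)]
        self.net = nn.Sequential(*layers)

    def forward(self, x):
        return self.net(x)

class PatchDiscriminator(nn.Module):
    """Shared discriminator body, instantiated twice for the local and global views."""
    def __init__(self, in_ch: int = 3, base_ch: int = 64):
        super().__init__()
        self.net = nn.Sequential(
            nn.Conv2d(in_ch, base_ch, 4, 2, 1), nn.LeakyReLU(0.2, True),
            nn.Conv2d(base_ch, base_ch * 2, 4, 2, 1), nn.LeakyReLU(0.2, True),
            nn.Conv2d(base_ch * 2, 1, 4, 1, 1),  # per-patch real/fake scores
        )

    def forward(self, x):
        return self.net(x)

# The global discriminator sees the full outpainted image; the local one sees
# only the generated border region, pressuring the generator toward locally
# consistent textures at the expansion boundary.
global_disc = PatchDiscriminator()
local_disc = PatchDiscriminator()

fake_full = torch.randn(1, 3, 128, 128)   # stand-in for a generated image
border = fake_full[:, :, :, 96:]          # hypothetical outpainted right strip
g_score = global_disc(fake_full)
l_score = local_disc(border)

# Reconstruction metrics as used in the abstract's comparison.
target = torch.randn_like(fake_full)
l1 = nn.L1Loss()(fake_full, target)
mse = nn.MSELoss()(fake_full, target)
```

The split follows the abstract's rationale: a single global discriminator can rate overall plausibility yet miss artifacts confined to the generated border, so a dedicated local discriminator that only ever sees that region targets the localized feature inconsistency the paper identifies.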