Paper Title

ResiDualGAN: Resize-Residual DualGAN for Cross-Domain Remote Sensing Images Semantic Segmentation

Authors

Yang Zhao, Peng Guo, Zihao Sun, Xiuwan Chen, Han Gao

Abstract

The performance of a semantic segmentation model for remote sensing (RS) images pretrained on an annotated dataset greatly decreases when it is tested on another, unannotated dataset because of the domain gap. Adversarial generative methods, e.g., DualGAN, are utilized for unpaired image-to-image translation to minimize the pixel-level domain gap, which is one of the common approaches for unsupervised domain adaptation (UDA). However, existing image translation methods face two problems when performing RS image translation: 1) they ignore the scale discrepancy between the two RS datasets, which greatly affects the accuracy on scale-invariant objects, and 2) they ignore the real-to-real characteristic of RS image translation, which introduces an unstable factor into model training. In this paper, ResiDualGAN is proposed for RS image translation, where an in-network resizer module addresses the scale discrepancy between RS datasets, and a residual connection strengthens the stability of real-to-real image translation and improves performance on cross-domain semantic segmentation tasks. Combined with an output space adaptation method, the proposed method greatly improves accuracy on common benchmarks, which demonstrates the superiority and reliability of ResiDualGAN. A thorough discussion at the end of the paper gives a reasonable explanation for the improvement brought by ResiDualGAN. Our source code is available at https://github.com/miemieyanga/ResiDualGAN-DRDG.
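The two architectural ideas named in the abstract, an in-network resizer that matches the spatial scale of the source dataset to the target dataset, and a residual connection so the generator only predicts a correction on top of the resized (real) input, can be sketched conceptually. The NumPy sketch below is a minimal illustration under stated assumptions, not the authors' implementation: `nearest_resize` stands in for the paper's learnable resizer module, and `generator_residual` is a hypothetical placeholder for the generator CNN.

```python
import numpy as np

def nearest_resize(img, out_h, out_w):
    """Nearest-neighbor resize of an (H, W, C) image array.

    Stand-in for ResiDualGAN's in-network resizer, which aligns the
    spatial scale (e.g., ground sampling distance) of the source
    domain with that of the target domain.
    """
    h, w = img.shape[:2]
    rows = (np.arange(out_h) * h / out_h).astype(int)
    cols = (np.arange(out_w) * w / out_w).astype(int)
    return img[rows][:, cols]

def generator_residual(x):
    """Hypothetical generator that predicts only a residual correction.

    In the real model this is a CNN; a zero residual keeps the
    sketch self-contained and runnable.
    """
    return np.zeros_like(x)

def translate(source_img, target_h, target_w):
    """Resize-residual translation: output = resize(x) + G(resize(x)).

    The residual connection anchors the translated image to the
    resized real input, which is what stabilizes real-to-real
    translation in the abstract's description.
    """
    resized = nearest_resize(source_img, target_h, target_w)
    return resized + generator_residual(resized)

# Example: translate a 64x64 source tile to a 96x96 target scale.
src = np.random.rand(64, 64, 3)
out = translate(src, 96, 96)
print(out.shape)  # (96, 96, 3)
```

With the placeholder zero residual, the output equals the resized input; a trained generator would add the domain-style correction on top of it.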
