Paper Title
GAN-Based Multi-View Video Coding with Spatio-Temporal EPI Reconstruction
Paper Authors
Paper Abstract
The introduction of multiple viewpoints in video scenes inevitably increases the bitrate required for storage and transmission. To reduce the bitrate, researchers have developed methods that skip intermediate viewpoints during compression and delivery and ultimately reconstruct them from Side Information (SI). Typically, depth maps are used to construct the SI. However, these methods suffer from reconstruction inaccuracies and inherently high bitrates. In this paper, we propose a novel multi-view video coding method that leverages the image generation capability of Generative Adversarial Networks (GANs) to improve the reconstruction accuracy of the SI. Additionally, we incorporate information from adjacent temporal and spatial viewpoints to further reduce SI redundancy. On the encoder side, we construct a spatio-temporal Epipolar Plane Image (EPI) and use a convolutional network to extract the latent code of a GAN as the SI. On the decoder side, we combine the SI with adjacent viewpoints and reconstruct the intermediate views using the GAN generator. Specifically, we impose a joint encoder constraint on reconstruction cost and SI entropy to achieve an optimal trade-off between reconstruction quality and bitrate overhead. Experiments demonstrate significantly improved Rate-Distortion (RD) performance compared with state-of-the-art methods.
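As a rough illustration of the joint encoder constraint described in the abstract, the sketch below minimizes L = D(x, x̂) + λ·H(s), where D is the reconstruction cost of the GAN-generated view x̂ against the ground-truth intermediate view x, H(s) is an entropy (bitrate) proxy for the SI code s, and λ controls the trade-off. This is a minimal sketch under stated assumptions: the module names (EPIEncoder, ToyGenerator), all tensor shapes, and the Gaussian rate surrogate are illustrative choices, not the paper's actual networks or entropy model.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

class EPIEncoder(nn.Module):
    """Convolutional encoder: maps a spatio-temporal EPI stack to a compact SI code."""
    def __init__(self, in_channels=3, code_dim=128):
        super().__init__()
        self.net = nn.Sequential(
            nn.Conv2d(in_channels, 64, 4, stride=2, padding=1), nn.ReLU(),
            nn.Conv2d(64, 128, 4, stride=2, padding=1), nn.ReLU(),
            nn.AdaptiveAvgPool2d(1), nn.Flatten(),
            nn.Linear(128, code_dim),
        )

    def forward(self, epi):
        return self.net(epi)

class ToyGenerator(nn.Module):
    """Stand-in for the GAN generator: fuses the SI code with two adjacent views.
    The paper uses a pretrained GAN generator; this stub only fixes the interface."""
    def __init__(self, code_dim=128, out_channels=3):
        super().__init__()
        self.fc = nn.Linear(code_dim, 16 * 16)
        # Two adjacent views (2 * out_channels) plus one broadcast SI channel.
        self.fuse = nn.Conv2d(2 * out_channels + 1, out_channels, 3, padding=1)

    def forward(self, si, x_adj):
        b, _, h, w = x_adj.shape
        si_map = self.fc(si).view(b, 1, 16, 16)
        si_map = F.interpolate(si_map, size=(h, w), mode="bilinear", align_corners=False)
        return self.fuse(torch.cat([x_adj, si_map], dim=1))

def joint_rd_loss(x_mid, x_rec, si_code, lam=0.01):
    """Distortion + lambda * rate: MSE reconstruction cost plus a differentiable
    rate surrogate (negative log-likelihood of the code under a unit Gaussian,
    up to constants), standing in for the SI entropy term."""
    distortion = F.mse_loss(x_rec, x_mid)
    rate = 0.5 * (si_code ** 2).mean()
    return distortion + lam * rate

# Toy end-to-end pass with random tensors (shapes are illustrative).
epi = torch.randn(2, 3, 64, 64)     # spatio-temporal EPI stack
x_adj = torch.randn(2, 6, 64, 64)   # left/right adjacent views, channel-stacked
x_mid = torch.randn(2, 3, 64, 64)   # ground-truth intermediate view
enc, gen = EPIEncoder(), ToyGenerator()
si = enc(epi)
loss = joint_rd_loss(x_mid, gen(si, x_adj), si)
loss.backward()
```

In a real pipeline, the ToyGenerator stub would be replaced by the pretrained GAN generator and the Gaussian surrogate by the learned entropy model actually used to entropy-code the SI.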