Paper Title

Boosting High-Level Vision with Joint Compression Artifacts Reduction and Super-Resolution

Paper Authors

Xiaoyu Xiang, Qian Lin, Jan P. Allebach

Paper Abstract

Due to the limits of bandwidth and storage space, digital images are usually down-scaled and compressed when transmitted over networks, resulting in loss of details and jarring artifacts that can lower the performance of high-level visual tasks. In this paper, we aim to generate an artifact-free high-resolution image from a low-resolution one compressed with an arbitrary quality factor by exploring joint compression artifacts reduction (CAR) and super-resolution (SR) tasks. First, we propose a context-aware joint CAR and SR neural network (CAJNN) that integrates both local and non-local features to solve CAR and SR in one-stage. Finally, a deep reconstruction network is adopted to predict high quality and high-resolution images. Evaluation on CAR and SR benchmark datasets shows that our CAJNN model outperforms previous methods and also takes 26.2% shorter runtime. Based on this model, we explore addressing two critical challenges in high-level computer vision: optical character recognition of low-resolution texts, and extremely tiny face detection. We demonstrate that CAJNN can serve as an effective image preprocessing method and improve the accuracy for real-scene text recognition (from 85.30% to 85.75%) and the average precision for tiny face detection (from 0.317 to 0.611).
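
The abstract describes a one-stage network that fuses local and non-local features and then reconstructs a high-resolution, artifact-free image. The snippet below is a minimal, illustrative PyTorch sketch of that general idea only, not the authors' CAJNN implementation: the block counts, channel widths, the simplified non-local attention, and the ×4 scale factor are all assumptions made for this example.

```python
# Illustrative sketch of a one-stage joint CAR + SR network in the spirit of
# the paper. NOT the authors' CAJNN: architecture details here are assumed.
import torch
import torch.nn as nn


class NonLocalBlock(nn.Module):
    """Simplified non-local (self-attention) block over spatial positions."""

    def __init__(self, channels: int):
        super().__init__()
        self.theta = nn.Conv2d(channels, channels // 2, 1)
        self.phi = nn.Conv2d(channels, channels // 2, 1)
        self.g = nn.Conv2d(channels, channels // 2, 1)
        self.out = nn.Conv2d(channels // 2, channels, 1)

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        b, c, h, w = x.shape
        q = self.theta(x).flatten(2).transpose(1, 2)        # (B, HW, C/2)
        k = self.phi(x).flatten(2)                          # (B, C/2, HW)
        v = self.g(x).flatten(2).transpose(1, 2)            # (B, HW, C/2)
        attn = torch.softmax(q @ k / (q.size(-1) ** 0.5), dim=-1)
        y = (attn @ v).transpose(1, 2).reshape(b, c // 2, h, w)
        return x + self.out(y)                              # residual connection


class JointCARSRNet(nn.Module):
    """One-stage joint CAR + SR: local conv features plus non-local context,
    followed by a pixel-shuffle reconstruction head."""

    def __init__(self, scale: int = 4, channels: int = 64, n_blocks: int = 8):
        super().__init__()
        self.head = nn.Conv2d(3, channels, 3, padding=1)
        local_layers = []
        for _ in range(n_blocks):                           # local feature extraction
            local_layers += [nn.Conv2d(channels, channels, 3, padding=1),
                             nn.ReLU(inplace=True)]
        self.local = nn.Sequential(*local_layers)
        self.non_local = NonLocalBlock(channels)            # global context
        self.reconstruct = nn.Sequential(                   # reconstruction head
            nn.Conv2d(channels, channels * scale ** 2, 3, padding=1),
            nn.PixelShuffle(scale),                         # sub-pixel upsampling
            nn.Conv2d(channels, 3, 3, padding=1),
        )

    def forward(self, lr_compressed: torch.Tensor) -> torch.Tensor:
        feat = self.head(lr_compressed)
        feat = feat + self.local(feat)                      # residual local branch
        feat = self.non_local(feat)                         # fuse non-local features
        return self.reconstruct(feat)                       # HR, artifact-free estimate


if __name__ == "__main__":
    net = JointCARSRNet(scale=4)
    lr = torch.rand(1, 3, 48, 48)                           # compressed low-res input
    print(net(lr).shape)                                    # torch.Size([1, 3, 192, 192])
```

Used as a preprocessing step, as in the paper's OCR and tiny-face-detection experiments, such a model would simply be run on the compressed low-resolution input and its output passed to an off-the-shelf recognizer or detector in place of the original image.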
