Paper Title
PyNET-QxQ: An Efficient PyNET Variant for QxQ Bayer Pattern Demosaicing in CMOS Image Sensors
Paper Authors
Paper Abstract
Deep learning-based image signal processor (ISP) models for mobile cameras can generate high-quality images that rival those of professional DSLR cameras. However, their computational demands often make them unsuitable for mobile settings. Additionally, modern mobile cameras employ non-Bayer color filter arrays (CFAs) such as Quad Bayer, Nona Bayer, and QxQ Bayer to enhance image quality, yet most existing deep learning-based ISP (or demosaicing) models focus primarily on the standard Bayer CFA. In this study, we present PyNET-QxQ, a lightweight demosaicing model derived from the original PyNET and specifically designed for the QxQ Bayer CFA pattern. We also propose a knowledge distillation method called progressive distillation to train the reduced network more effectively. Consequently, PyNET-QxQ contains less than 2.5% of the parameters of the original PyNET while preserving its performance. Experiments using QxQ images captured by a prototype QxQ camera sensor show that PyNET-QxQ outperforms existing conventional algorithms in terms of texture and edge reconstruction, despite its significantly reduced parameter count.
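For readers unfamiliar with the CFA layouts named in the abstract, the short Python sketch below illustrates how a QxQ Bayer mosaic can be constructed. It is not taken from the paper; the 4x4 group size and the RGGB ordering of color groups are assumptions based on common QxQ sensor layouts, and the function name qxq_bayer_mask is purely illustrative.

    # Illustrative sketch (assumption, not the paper's code): a QxQ Bayer CFA,
    # where each color of the 2x2 RGGB Bayer unit is expanded into a 4x4 block
    # of identical filters, giving an 8x8 super-cell.
    import numpy as np

    def qxq_bayer_mask(height, width, group=4):
        """Return an (height, width) array of channel indices (0=R, 1=G, 2=B)."""
        bayer_unit = np.array([[0, 1],
                               [1, 2]])                       # R G / G B
        # Expand each color into a group x group block of identical filters.
        super_cell = np.kron(bayer_unit, np.ones((group, group), dtype=int))
        # Tile the super-cell over the sensor and crop to the requested size.
        reps_y = -(-height // super_cell.shape[0])             # ceil division
        reps_x = -(-width // super_cell.shape[1])
        return np.tile(super_cell, (reps_y, reps_x))[:height, :width]

    # Example: an 8x8 crop shows four 4x4 single-color groups (R, G, G, B).
    print(qxq_bayer_mask(8, 8))

Quad Bayer and Nona Bayer correspond to the same construction with group=2 and group=3, respectively; demosaicing such a mosaic means recovering all three color channels at every pixel from this single-channel sampling.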