Title
Perspective Transformation Layer
Authors
Abstract
Incorporating geometric transformations that reflect the relative position changes between an observer and an object into computer vision and deep learning models has attracted much attention in recent years. However, existing proposals mainly focus on the affine transformation, which is insufficient to reflect such geometric position changes. Furthermore, current solutions often apply a neural network module to learn a single transformation matrix, which not only ignores the importance of multi-view analysis but also introduces extra trainable parameters from the module, beyond the transformation-matrix parameters themselves, increasing model complexity. In this paper, a perspective transformation layer is proposed in the context of deep learning. The proposed layer can learn homographies and thereby reflect the geometric relationship between observers and objects. In addition, by directly training its transformation matrices, a single proposed layer can learn an adjustable number of multiple viewpoints without requiring additional module parameters. The experiments and evaluations confirm the superiority of the proposed layer.
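To make the idea concrete, the following is a minimal NumPy sketch of such a layer: it holds K directly trainable 3x3 homography matrices and produces K warped views of its input via inverse mapping with nearest-neighbor sampling. This is an illustrative assumption, not the paper's implementation; the function and class names (`perspective_warp`, `PerspectiveLayer`) are hypothetical, and a real layer would use differentiable (e.g., bilinear) sampling so gradients can flow into the matrices.

```python
import numpy as np

def perspective_warp(image, H):
    """Warp a 2-D image by homography H (3x3) using inverse mapping
    with nearest-neighbor sampling (illustrative sketch only)."""
    h, w = image.shape
    Hinv = np.linalg.inv(H)
    ys, xs = np.mgrid[0:h, 0:w]
    # homogeneous output-pixel coordinates, shape (3, h*w)
    coords = np.stack([xs.ravel(), ys.ravel(), np.ones(h * w)])
    src = Hinv @ coords
    src /= src[2]                       # perspective divide
    sx = np.rint(src[0]).astype(int)
    sy = np.rint(src[1]).astype(int)
    valid = (sx >= 0) & (sx < w) & (sy >= 0) & (sy < h)
    out = np.zeros_like(image)
    out.ravel()[valid] = image[sy[valid], sx[valid]]
    return out

class PerspectiveLayer:
    """Toy layer holding K directly trainable homography matrices;
    its forward pass returns K warped views of the input."""
    def __init__(self, num_views, rng=None):
        rng = rng or np.random.default_rng(0)
        # initialize each homography near the identity transform
        self.H = np.stack([np.eye(3) + 0.01 * rng.standard_normal((3, 3))
                           for _ in range(num_views)])

    def forward(self, image):
        return np.stack([perspective_warp(image, Hk) for Hk in self.H])
```

Note that, unlike module-based designs (e.g., a localization network predicting a single matrix), the only parameters here are the K homography matrices themselves, and K is directly adjustable.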