Paper Title

Iterative Deep Homography Estimation

Paper Authors

Si-Yuan Cao, Jianxin Hu, Zehua Sheng, Hui-Liang Shen

Paper Abstract

We propose Iterative Homography Network, namely IHN, a new deep homography estimation architecture. Different from previous works that achieve iterative refinement by network cascading or untrainable IC-LK iterator, the iterator of IHN has tied weights and is completely trainable. IHN achieves state-of-the-art accuracy on several datasets including challenging scenes. We propose 2 versions of IHN: (1) IHN for static scenes, (2) IHN-mov for dynamic scenes with moving objects. Both versions can be arranged in 1-scale for efficiency or 2-scale for accuracy. We show that the basic 1-scale IHN already outperforms most of the existing methods. On a variety of datasets, the 2-scale IHN outperforms all competitors by a large gap. We introduce IHN-mov by producing an inlier mask to further improve the estimation accuracy of moving-objects scenes. We experimentally show that the iterative framework of IHN can achieve 95% error reduction while considerably saving network parameters. When processing sequential image pairs, IHN can achieve 32.7 fps, which is about 8x the speed of IC-LK iterator. Source code is available at https://github.com/imdumpl78/IHN.
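The core idea named in the abstract, a tied-weight iterator applied repeatedly to refine a homography estimate, can be illustrated with a minimal PyTorch sketch. This is written under my own assumptions and is not the authors' implementation: it uses the common 4-point (corner-displacement) parameterization with a DLT solve, a plain convolutional regressor standing in for IHN's actual update module, and it omits the feature extractor, the 2-scale arrangement, and the inlier mask of IHN-mov. All names (`IteratorBlock`, `four_point_to_homography`, `warp_with_homography`, `iterative_estimate`) are hypothetical.

```python
# Minimal sketch (NOT the authors' implementation) of tied-weight iterative
# homography refinement with a 4-point parameterization.
import torch
import torch.nn as nn
import torch.nn.functional as F


def four_point_to_homography(src, dst):
    """DLT solve of the 3x3 homography mapping 4 src corners to 4 dst corners.

    src, dst: (B, 4, 2) tensors of (x, y) coordinates.
    """
    B = src.shape[0]
    rows = []
    for i in range(4):
        x, y = src[:, i, 0], src[:, i, 1]
        u, v = dst[:, i, 0], dst[:, i, 1]
        zero, one = torch.zeros_like(x), torch.ones_like(x)
        rows.append(torch.stack([x, y, one, zero, zero, zero, -u * x, -u * y, -u], dim=1))
        rows.append(torch.stack([zero, zero, zero, x, y, one, -v * x, -v * y, -v], dim=1))
    A = torch.stack(rows, dim=1)                      # (B, 8, 9)
    _, _, Vh = torch.linalg.svd(A)                    # null vector = last row of Vh
    H = Vh[:, -1, :].reshape(B, 3, 3)
    return H / H[:, 2:3, 2:3]                         # normalize so H[2, 2] = 1


def warp_with_homography(feat, H):
    """Sample the moving feature map at H-transformed grid points of the reference frame."""
    B, C, Hh, Ww = feat.shape
    ys, xs = torch.meshgrid(
        torch.arange(Hh, dtype=feat.dtype, device=feat.device),
        torch.arange(Ww, dtype=feat.dtype, device=feat.device),
        indexing="ij",
    )
    grid = torch.stack([xs, ys, torch.ones_like(xs)], dim=-1).reshape(-1, 3)   # (H*W, 3)
    pts = (H @ grid.T.unsqueeze(0)).transpose(1, 2)                            # (B, H*W, 3)
    pts = pts[..., :2] / pts[..., 2:3]
    gx = 2 * pts[..., 0] / (Ww - 1) - 1                                        # to [-1, 1]
    gy = 2 * pts[..., 1] / (Hh - 1) - 1
    sample_grid = torch.stack([gx, gy], dim=-1).reshape(B, Hh, Ww, 2)
    return F.grid_sample(feat, sample_grid, align_corners=True)


class IteratorBlock(nn.Module):
    """One refinement step: concatenated features -> residual corner displacement."""

    def __init__(self, feat_dim=64):
        super().__init__()
        self.net = nn.Sequential(
            nn.Conv2d(2 * feat_dim, 128, 3, padding=1), nn.ReLU(inplace=True),
            nn.Conv2d(128, 128, 3, stride=2, padding=1), nn.ReLU(inplace=True),
            nn.AdaptiveAvgPool2d(1), nn.Flatten(),
            nn.Linear(128, 8),                        # 4 corners x (dx, dy)
        )

    def forward(self, feat_ref, feat_mov_warped):
        x = torch.cat([feat_ref, feat_mov_warped], dim=1)
        return self.net(x).view(-1, 4, 2)


def iterative_estimate(feat_ref, feat_mov, iterator, corners, K=6):
    """Apply the SAME iterator K times; the corner displacement accumulates each step."""
    disp = torch.zeros_like(corners)
    for _ in range(K):
        H = four_point_to_homography(corners, corners + disp)
        disp = disp + iterator(feat_ref, warp_with_homography(feat_mov, H))
    return disp


if __name__ == "__main__":
    B, D, Hh, Ww = 2, 64, 32, 32
    feat_ref, feat_mov = torch.randn(B, D, Hh, Ww), torch.randn(B, D, Hh, Ww)
    corners = torch.tensor([[0.0, 0.0], [Ww - 1, 0.0],
                            [Ww - 1, Hh - 1], [0.0, Hh - 1]]).expand(B, 4, 2)
    disp = iterative_estimate(feat_ref, feat_mov, IteratorBlock(D), corners, K=6)
    print(disp.shape)                                 # torch.Size([2, 4, 2])
```

The "tied weights" aspect of the abstract is reflected by reusing the same `IteratorBlock` instance at every step, so extra iterations add computation but no extra parameters; the accumulated displacement is re-converted to a homography before each warp so that every step sees the current alignment.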
