Paper Title
A Backbone Replaceable Fine-tuning Framework for Stable Face Alignment
Paper Authors
Paper Abstract
Heatmap regression based face alignment has achieved prominent performance on static images. However, stability and accuracy degrade remarkably when existing methods are applied to dynamic videos. We attribute the degradation to random noise and motion blur, which are common in videos. Temporal information is critical to addressing this issue, yet it has not been fully considered in existing works. In this paper, we approach the video-oriented face alignment problem from two perspectives: detection accuracy, which prefers lower error on a single frame, and detection consistency, which enforces better stability between adjacent frames. On this basis, we propose a Jitter loss function that leverages temporal information to suppress both inaccurate and jittered landmarks. The Jitter loss is incorporated into a novel framework that fine-tunes a ConvLSTM structure over a replaceable backbone network. We further demonstrate that accurate and stable landmarks are associated with different but overlapping regions in canonical coordinates, based on which the proposed Jitter loss facilitates the optimization process during training. The proposed framework achieves at least a 40% improvement on stability evaluation metrics while also enhancing detection accuracy compared with state-of-the-art methods. In general, it can swiftly convert a landmark detector for facial images into a better-performing one for videos without retraining the entire model.
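The abstract does not give the Jitter loss formula, but it states that the loss combines the two perspectives: per-frame detection accuracy and temporal detection consistency. A minimal sketch of such a two-term objective, assuming an L2 accuracy term plus a weighted penalty on landmark displacement between adjacent frames (the function name, shapes, and the `lam` hyperparameter are illustrative assumptions, not the paper's exact formulation):

```python
import numpy as np

def jitter_style_loss(pred, gt, lam=1.0):
    """Illustrative two-term loss for video landmark detection.

    pred, gt : arrays of shape (T, N, 2) -- T frames, N landmarks, (x, y).
    Accuracy term: mean L2 error of each landmark against ground truth.
    Consistency term: mean L2 displacement of predictions between
    adjacent frames, weighted by the assumed hyperparameter `lam`.
    """
    accuracy = np.linalg.norm(pred - gt, axis=-1).mean()
    consistency = np.linalg.norm(pred[1:] - pred[:-1], axis=-1).mean()
    return accuracy + lam * consistency
```

Under this sketch, a detector that is accurate on every frame but jitters between frames is still penalized by the consistency term, which matches the abstract's goal of suppressing "inaccurate as well as jittered landmarks".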
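The "backbone replaceable" design can be pictured as wrapping any frozen per-frame detector with a small trainable temporal head, so only the head is fine-tuned. The sketch below is a stand-in for that idea only: an exponential moving average replaces the paper's ConvLSTM head, and the class name and `alpha` parameter are assumptions for illustration, not the paper's architecture.

```python
import numpy as np

class TemporalSmoothingWrapper:
    """Wrap a frozen per-frame landmark detector with a temporal head.

    `backbone` is any callable mapping a frame to landmark coordinates;
    it stays untouched (mimicking "without retraining the entire model").
    `alpha` weights the current frame against the smoothed history and
    stands in for the trainable ConvLSTM state of the actual framework.
    """

    def __init__(self, backbone, alpha=0.5):
        self.backbone = backbone
        self.alpha = alpha
        self._prev = None  # smoothed landmarks from the previous frame

    def reset(self):
        """Clear temporal state before processing a new video."""
        self._prev = None

    def __call__(self, frame):
        raw = self.backbone(frame)
        if self._prev is None:
            out = raw
        else:
            out = self.alpha * raw + (1.0 - self.alpha) * self._prev
        self._prev = out
        return out
```

Because the backbone is treated as a black box, swapping in a different image-based detector requires no change to the temporal head, which is the property the abstract highlights.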