Paper Title
'Labelling the Gaps': A Weakly Supervised Automatic Eye Gaze Estimation
Paper Authors
Paper Abstract
Over the past few years, there has been increasing interest in interpreting gaze direction in unconstrained environments with limited supervision. Owing to data curation and annotation issues, replicating gaze estimation methods on other platforms, such as unconstrained outdoor settings or AR/VR, can lead to a significant drop in performance because accurately annotated data for model training is insufficiently available. In this paper, we explore the interesting yet challenging problem of gaze estimation with a limited amount of labelled data. The proposed method distills knowledge from the labelled subset using visual features, including identity-specific appearance, gaze trajectory consistency, and motion features. Given a gaze trajectory, the method utilizes the label information of only the start and end frames of the gaze sequence. An extension of the proposed method further reduces the labelling requirement to the start frame alone, with only a minor drop in the quality of the generated labels. We evaluate the proposed method on four benchmark datasets (CAVE, TabletGaze, MPII and Gaze360) as well as web-crawled YouTube videos. Our proposed method reduces the annotation effort to as low as 2.67% with minimal impact on performance, indicating the potential of our model to enable gaze estimation in an 'in-the-wild' setup.
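The abstract states that, given a gaze trajectory, only the start and end frames carry label information, from which labels for the intermediate frames ("the gaps") are generated. The snippet below is a minimal, hypothetical sketch of that idea under a simple trajectory-consistency assumption: intermediate pseudo-labels are produced by spherically interpolating between the two labelled gaze directions. It is not the authors' actual method (which also exploits identity-specific appearance and motion features), and the function name, the unit-vector gaze representation, and the uniform time spacing are all assumptions made for illustration.

```python
import numpy as np

def interpolate_gaze_labels(start_gaze, end_gaze, num_frames):
    """Hypothetical sketch: generate pseudo-labels for the intermediate
    frames of a gaze sequence from the labelled start and end frames only,
    assuming the gaze direction varies smoothly along the trajectory.

    Gaze directions are represented as 3D unit vectors; intermediate labels
    are obtained via spherical linear interpolation (slerp)."""
    start = np.asarray(start_gaze, dtype=float)
    end = np.asarray(end_gaze, dtype=float)
    start /= np.linalg.norm(start)
    end /= np.linalg.norm(end)

    # Angle between the two labelled gaze directions.
    cos_omega = np.clip(np.dot(start, end), -1.0, 1.0)
    omega = np.arccos(cos_omega)

    labels = []
    for i in range(num_frames):
        t = i / (num_frames - 1)  # assume frames are uniformly spaced in time
        if omega < 1e-6:
            # Nearly identical endpoint directions: fall back to linear blend.
            g = (1.0 - t) * start + t * end
        else:
            g = (np.sin((1.0 - t) * omega) * start + np.sin(t * omega) * end) / np.sin(omega)
        labels.append(g / np.linalg.norm(g))
    return np.stack(labels)

# Example: pseudo-labels for a 10-frame sequence whose endpoints are labelled.
pseudo_labels = interpolate_gaze_labels([0.0, 0.1, -1.0], [0.3, -0.1, -0.95], num_frames=10)
print(pseudo_labels.shape)  # (10, 3)
```

In this toy version, the quality of the generated labels depends entirely on how smoothly the true gaze moves between the two labelled endpoints, which is why the paper's use of appearance and motion cues matters for sequences with less regular trajectories.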