Paper Title

Natural grasp intention recognition based on gaze fixation in human-robot interaction

Authors

Bo Yang, Jian Huang, Xiaolong Li, Xinxing Chen, Caihua Xiong, Yasuhisa Hasegawa

Abstract

Eye movement is closely related to limb actions, so it can be used to infer movement intentions. More importantly, in some cases, eye movement is the only way for paralyzed patients with severe movement disorders to communicate and interact with the environment. Despite this, eye-tracking technology still has very limited application scenarios as an intention recognition method. The goal of this paper is to achieve a natural fixation-based grasping intention recognition method, with which a user with hand movement disorders can intuitively express what task he/she wants to do by directly looking at the object of interest. Toward this goal, we design experiments to study the relationships among fixations in different tasks. We propose several quantitative features derived from these relationships and analyze them statistically. We then design a natural method for grasping intention recognition. The experimental results show that the accuracy of the proposed grasping intention recognition method exceeds 89% on the training objects. When the method is extended to objects not included in the training set, the average accuracy exceeds 85%. A grasping experiment in a real environment verifies the effectiveness of the proposed method.
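The abstract describes extracting quantitative features from gaze fixations before classification. As a minimal illustrative sketch of this kind of pipeline (not the authors' actual method), the following detects fixations from raw gaze samples with the standard dispersion-threshold (I-DT) algorithm and computes a few simple fixation features; the thresholds and feature choices are assumptions for illustration.

```python
def dispersion(window):
    """Spatial spread of a window of (t, x, y) gaze samples."""
    xs = [s[1] for s in window]
    ys = [s[2] for s in window]
    return (max(xs) - min(xs)) + (max(ys) - min(ys))

def detect_fixations(samples, max_dispersion=0.05, min_duration=0.1):
    """Group (t, x, y) gaze samples into fixations using I-DT.

    A window counts as a fixation when its dispersion stays below
    max_dispersion and it lasts at least min_duration seconds.
    Returns (t_start, t_end, centroid_x, centroid_y) tuples.
    """
    fixations, i, n = [], 0, len(samples)
    while i < n:
        j = i
        # Grow the window while its dispersion stays under the threshold.
        while j + 1 < n and dispersion(samples[i:j + 2]) <= max_dispersion:
            j += 1
        window = samples[i:j + 1]
        if window[-1][0] - window[0][0] >= min_duration:
            cx = sum(s[1] for s in window) / len(window)
            cy = sum(s[2] for s in window) / len(window)
            fixations.append((window[0][0], window[-1][0], cx, cy))
            i = j + 1  # continue after the fixation
        else:
            i += 1  # too short: advance one sample and retry
    return fixations

def fixation_features(fixations):
    """Simple candidate features: count, mean duration, total duration."""
    durations = [t1 - t0 for t0, t1, _, _ in fixations]
    count = len(fixations)
    total = sum(durations)
    return {"count": count,
            "mean_duration": total / count if count else 0.0,
            "total_duration": total}
```

For example, 100 Hz gaze data that dwells 0.3 s on one point and then 0.3 s on another yields two fixations, whose features could then feed a classifier of grasping intention.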
