Paper Title

Eliciting Multimodal Gesture+Speech Interactions in a Multi-Object Augmented Reality Environment

Authors

Xiaoyan Zhou, Adam S. Williams, Francisco R. Ortega

Abstract

As augmented reality technology and hardware become more mature and affordable, researchers have been exploring more intuitive and discoverable interaction techniques for immersive environments. In this paper, we investigate multimodal interaction for 3D object manipulation in a multi-object virtual environment. To identify user-defined gestures, we conducted an elicitation study involving 24 participants and 22 referents using an augmented reality headset. The study yielded 528 proposals and, after binning and ranking all gesture proposals, produced a winning gesture set with 25 gestures. We found that for the same task, the same gesture was preferred for both one- and two-object manipulation, although both hands were used in the two-object scenario. We present the gesture and speech results, along with the differences compared to similar studies conducted in single-object virtual environments. The study also explored the association between speech expressions and gesture strokes during object manipulation, which could improve recognizer efficiency in augmented reality headsets.
