Paper Title
EyeTAP: A Novel Technique using Voice Inputs to Address the Midas Touch Problem for Gaze-based Interactions
Paper Authors
Paper Abstract
One of the main challenges of gaze-based interaction is distinguishing normal eye function from a deliberate interaction with the computer system, commonly referred to as the 'Midas touch' problem. In this paper we propose EyeTAP (Eye tracking point-and-select by Targeted Acoustic Pulse), a hands-free interaction method for point-and-select tasks. We evaluated the prototype in two separate user studies, each containing two experiments, with 33 participants, and found that EyeTAP is robust even in the presence of ambient noise in the audio input signal, with a tolerance of up to 70 dB; achieves faster movement and task completion times; and imposes a lower cognitive workload than voice recognition. In addition, EyeTAP showed a lower error rate than the dwell-time method in a ribbon-shaped experiment. These characteristics make it applicable for users whose physical movements are restricted or impossible due to a disability. Furthermore, EyeTAP places no specific requirements on user interface design and can therefore be integrated into existing systems with minimal modifications. EyeTAP can be regarded as an acceptable alternative for addressing the Midas touch problem.
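The abstract describes the mechanism only at a high level: a gaze tracker supplies the pointing coordinate, and a short acoustic pulse picked up by the microphone (rather than a dwell timeout or a recognized voice command) triggers the selection, with steady ambient noise tolerated up to about 70 dB. The Python sketch below illustrates one plausible realization of that trigger logic; it is not the paper's implementation. The adaptive amplitude threshold, the `ratio` and `alpha` parameters, and the function names are all illustrative assumptions.

```python
import numpy as np

def gaze_select(frames, gaze_points, ratio=4.0, alpha=0.05):
    """Pair detected acoustic pulses with the concurrent gaze point.

    frames      : iterable of audio buffers (NumPy arrays scaled to [-1, 1])
    gaze_points : iterable of (x, y) screen coordinates sampled at the
                  same instants as the audio frames
    ratio       : how far a pulse must rise above the ambient-noise floor
    alpha       : smoothing factor for the running noise-floor estimate

    Returns the list of selected (x, y) coordinates. All parameter values
    are hypothetical, not taken from the paper.
    """
    noise_floor = None
    selections = []
    for frame, gaze in zip(frames, gaze_points):
        peak = np.max(np.abs(frame))
        if noise_floor is None:
            noise_floor = peak          # calibrate on the first (quiet) frame
            continue
        if peak > ratio * noise_floor:
            selections.append(gaze)     # pulse detected: select target under gaze
        else:
            # update the noise floor only on quiet frames, so steady
            # background noise is absorbed but pulses are not
            noise_floor = (1 - alpha) * noise_floor + alpha * peak
    return selections

if __name__ == "__main__":
    rng = np.random.default_rng(0)
    # synthetic stream: low-level noise, with a loud click in frame 5
    frames = [0.01 * rng.standard_normal(256) for _ in range(10)]
    frames[5][128] = 0.9
    gaze = [(i * 10, i * 10) for i in range(10)]
    print(gaze_select(frames, gaze))    # prints [(50, 50)]
```

Because the detector compares the pulse against a running estimate of the ambient level rather than a fixed threshold, steady background noise raises the floor instead of firing selections, which is one simple way to obtain the kind of noise tolerance the abstract reports.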