Paper Title
MindGrasp: A New Training and Testing Framework for Motor Imagery Based 3-Dimensional Assistive Robotic Control
Paper Authors
Paper Abstract
With increasing global age and disability, assistive robots are becoming more necessary, and brain-computer interfaces (BCIs) are often proposed as a solution for understanding the intent of a disabled person who needs assistance. Most frameworks for electroencephalography (EEG)-based motor imagery (MI) BCI control rely on direct control of the robot in Cartesian space. However, for 3-dimensional movement this requires 6 motor imagery classes, which is a difficult distinction even for experienced BCI users. In this paper, we present a simulated training and testing framework that reduces the number of motor imagery classes to 4 while still grasping objects in three-dimensional space. This is achieved through semi-autonomous eye-in-hand vision-based control of the robotic arm, while the user-controlled BCI commands movement to the left and right, as well as movement toward and away from the object of interest. Additionally, the framework includes a method of training the BCI directly on the assistive robotic system, which should transfer more easily to a real-world assistive robot than a standard training protocol such as Graz-BCI. The presented results do not use real human EEG data; rather, they serve as a baseline for comparison with future human data and further improvements to the system.
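The division of labor the abstract describes can be illustrated with a minimal sketch. This is not the authors' implementation; the class labels, frame conventions, helper names, and gain values are assumptions for exposition. The idea it shows: four MI classes command left/right and toward/away motion of the end effector, while an eye-in-hand visual servoing term autonomously regulates the vertical axis, removing the two up/down classes a direct 6-class Cartesian scheme would need.

```python
# Minimal sketch of the hybrid control scheme described in the abstract.
# Hypothetical: class labels, camera-frame conventions, and gains are
# illustrative and not taken from the paper.
import numpy as np

# Four motor imagery classes -> planar velocity in the camera frame.
# x: left/right; z: toward/away from the object of interest.
MI_COMMANDS = {
    "left":     np.array([-0.05, 0.0, 0.0]),   # m/s
    "right":    np.array([+0.05, 0.0, 0.0]),
    "forward":  np.array([0.0, 0.0, +0.05]),   # toward the object
    "backward": np.array([0.0, 0.0, -0.05]),   # away from the object
}

def autonomous_correction(object_px, image_center=(320, 240), gain=1e-4):
    """Eye-in-hand visual servoing term: drive the object's vertical
    image error toward zero, so the user never issues up/down commands
    (the two classes removed from the standard 6-class scheme)."""
    err_v = image_center[1] - object_px[1]
    return np.array([0.0, gain * err_v, 0.0])  # vertical velocity only

def velocity_command(mi_class, object_px):
    """Blend the decoded BCI command with the autonomous vision term."""
    user = MI_COMMANDS.get(mi_class, np.zeros(3))
    return user + autonomous_correction(object_px)

# Example: the user imagines "forward" while the detected object sits
# below the image center; the arm advances and corrects height itself.
print(velocity_command("forward", object_px=(300, 300)))
```

One design point this sketch makes concrete: because the autonomous term only touches the degree of freedom the user cannot command, the BCI decoder's 4-class output space and the vision controller never conflict, which is what permits the reduction from 6 classes to 4 without losing 3-dimensional reach.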