Paper Title
Leveraging Tactile Sensors for Low Latency Embedded Smart Hands for Prosthetic and Robotic Applications
Paper Authors
Paper Abstract
Tactile sensing is a crucial perception mode for robots and for human amputees who need to control a prosthetic device. Today, robotic and prosthetic systems still lack accurate tactile sensing, mainly because existing tactile technologies offer limited spatial and temporal resolution and are either expensive or not scalable. In this paper, we present the design and implementation of a hardware-software embedded system called SmartHand. It is specifically designed to enable the acquisition and real-time processing of high-resolution tactile information from a hand-shaped multi-sensor array for prosthetic and robotic applications. During data collection, our system delivers a high throughput of 100 frames per second, 13.7x higher than previous related work. We collect a new tactile dataset of interactions with daily-life objects over five different sessions. We propose a compact yet accurate convolutional neural network that requires one order of magnitude less memory and 15.6x fewer computations than related work, without degrading classification accuracy. The top-1 and top-3 cross-validation accuracies are 98.86% and 99.83%, respectively. We further analyze the inter-session variability and obtain a best top-3 leave-one-session-out validation accuracy of 77.84%. We deploy the trained model on a high-performance ARM Cortex-M7 microcontroller, achieving an inference time of only 100 ms and thereby minimizing response latency. The overall measured power consumption is 505 mW. Finally, we fabricate a new control sensor and perform additional experiments to analyze sensor degradation and slip detection. This work is a step forward in giving robotic and prosthetic devices a sense of touch, and it demonstrates the practicality of a smart embedded system empowered by tiny machine learning.
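To make the approach concrete, below is a minimal sketch of a compact CNN for classifying single tactile frames, in the spirit of the model described above. The abstract does not specify the SmartHand architecture, so the layer sizes, the input resolution (32x32), and the number of object classes (16) are all hypothetical choices meant only to illustrate the kind of small-footprint network that can fit on a microcontroller.

```python
# Illustrative sketch only: the layer widths, the 32x32 input resolution,
# and the 16 object classes are assumptions, not the paper's architecture.
import torch
import torch.nn as nn


class CompactTactileCNN(nn.Module):
    """Small CNN that classifies one tactile pressure frame into an object class."""

    def __init__(self, num_classes: int = 16):
        super().__init__()
        self.features = nn.Sequential(
            nn.Conv2d(1, 8, kernel_size=3, padding=1),   # single-channel pressure map
            nn.ReLU(),
            nn.MaxPool2d(2),                             # 32x32 -> 16x16
            nn.Conv2d(8, 16, kernel_size=3, padding=1),
            nn.ReLU(),
            nn.MaxPool2d(2),                             # 16x16 -> 8x8
        )
        self.classifier = nn.Linear(16 * 8 * 8, num_classes)

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        x = self.features(x)
        return self.classifier(x.flatten(1))


if __name__ == "__main__":
    model = CompactTactileCNN()
    frame = torch.randn(1, 1, 32, 32)   # one hypothetical tactile frame
    logits = model(frame)
    print(logits.shape)                 # torch.Size([1, 16])
```

Keeping the convolutional channel counts small and using a single fully connected layer is the usual way to stay within the memory and compute budget of an MCU like the Cortex-M7; a model of this shape would typically be quantized and exported (e.g., via a TinyML toolchain) before deployment.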