Title
Neuromuscular Reinforcement Learning to Actuate Human Limbs through FES
Authors
Abstract
Functional Electrical Stimulation (FES) is a technique to evoke muscle contraction through low-energy electrical signals. FES can animate paralysed limbs. Yet, an open challenge remains: how to apply FES to achieve desired movements. This challenge is accentuated by the complexity of the human body and the non-stationarity of muscle responses. The former makes inverse dynamics difficult, and the latter causes control performance to degrade over extended periods of use. Here, we address the challenge via a data-driven approach. Specifically, we learn to control FES through Reinforcement Learning (RL), which can automatically customise stimulation for individual patients. However, RL typically relies on a Markovian assumption, while FES control systems are non-Markovian because of the non-stationarities. To deal with this problem, we use a recurrent neural network to create Markovian state representations. We cast FES control as an RL problem and train RL agents to control FES in different settings, both in simulation and in the real world. The results show that our RL controllers can maintain control performance over long periods and have better stimulation characteristics than PID controllers.
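To make the core idea concrete, here is a minimal NumPy-only sketch of how a recurrent network can summarise a history of observations into a hidden state that serves as an (approximately) Markovian state for an RL policy. All names, dimensions, and the Elman-style cell are illustrative assumptions, not the authors' implementation; in practice the encoder and policy would be trained jointly by an RL algorithm.

```python
# Sketch (hypothetical): a recurrent encoder folds the observation history
# into a hidden state h_t; the policy conditions on h_t rather than on the
# raw (non-Markovian) observation. Weights are random here, not trained.
import numpy as np

rng = np.random.default_rng(0)

OBS_DIM = 4      # e.g. joint angle, angular velocity, last stimulation, tracking error
HID_DIM = 16     # recurrent state size (illustrative)
ACT_DIM = 1      # stimulation intensity

# Elman-style recurrent cell parameters.
W_xh = rng.normal(0, 0.1, (HID_DIM, OBS_DIM))
W_hh = rng.normal(0, 0.1, (HID_DIM, HID_DIM))
b_h = np.zeros(HID_DIM)

# Linear policy head mapping hidden state to a stimulation command.
W_ha = rng.normal(0, 0.1, (ACT_DIM, HID_DIM))
b_a = np.zeros(ACT_DIM)

def encode(obs, h):
    """Fold one observation into the recurrent hidden state."""
    return np.tanh(W_xh @ obs + W_hh @ h + b_h)

def policy(h):
    """Map the hidden state to a stimulation intensity in (0, 1)."""
    return 1.0 / (1.0 + np.exp(-(W_ha @ h + b_a)))  # sigmoid squash

# Roll out over a short observation history: the hidden state carries the
# information the RL agent needs about past muscle responses.
h = np.zeros(HID_DIM)
for t in range(10):
    obs = rng.normal(0, 1, OBS_DIM)  # placeholder sensor readings
    h = encode(obs, h)
    action = policy(h)
```

The design point is that non-stationarity (e.g. muscle fatigue) is not visible in a single observation, but a recurrent summary of recent stimulation-response pairs can expose it to the policy.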