Title
Reinforcement Learning for Control of Valves
Authors
Abstract
This paper is a study of reinforcement learning (RL) as an optimal-control strategy for nonlinear valves, evaluated against the PID (proportional-integral-derivative) strategy in a unified framework. RL is an autonomous learning mechanism that learns by interacting with its environment. It is gaining increasing attention in the control-systems world as a means of building optimal controllers for challenging dynamic and nonlinear processes. Published RL research often uses open-source tools (Python and OpenAI Gym environments). We instead use MATLAB's recently launched (R2019a) Reinforcement Learning Toolbox to develop the valve controller, trained with the DDPG (Deep Deterministic Policy Gradient) algorithm, and Simulink to simulate the nonlinear valve and build the experimental test bench for evaluation. Simulink allows industrial engineers to quickly adapt the setup and experiment with other systems of their choice. Results indicate that the RL controller is very good at tracking the reference signal quickly and produces a lower error with respect to it. The PID controller, however, is better at disturbance rejection and hence provides a longer life for the valves. Successful machine learning involves tuning many hyperparameters, which requires a significant investment of time and effort. We introduce "Graded Learning" as a simplified, application-oriented adaptation of the more formal and algorithmic "Curriculum for Reinforcement Learning". Experiments show that it helps the learning task converge for complex nonlinear real-world systems. Finally, the experiential learnings gained from this research are corroborated against published research.
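The "Graded Learning" idea described above amounts to staging training: the agent trained on an easy version of the task seeds training on progressively harder versions. A minimal sketch in Python illustrates the control flow; the `train` function, the stage names, and the episode counts are illustrative assumptions, not the paper's actual MATLAB/Simulink implementation.

```python
# Sketch of "Graded Learning": one agent is carried through a sequence
# of (difficulty, episodes) stages, warm-starting each stage from the last.
# All interfaces here are hypothetical stand-ins for a real RL trainer.

def train(agent, difficulty, episodes):
    # Stand-in for an RL training loop (e.g. DDPG on the valve model);
    # here it only records the curriculum the agent was exposed to.
    agent["history"].append((difficulty, episodes))
    return agent

def graded_learning(stages):
    """Train a single agent through a sequence of increasingly hard stages."""
    agent = {"history": []}  # freshly initialised agent
    for difficulty, episodes in stages:
        agent = train(agent, difficulty, episodes)  # reuse weights from prior stage
    return agent

# Example grading: ideal plant -> mild noise/disturbance -> full nonlinearity.
agent = graded_learning([("easy", 100), ("medium", 200), ("hard", 400)])
print(agent["history"])  # [('easy', 100), ('medium', 200), ('hard', 400)]
```

The design choice is that each stage changes only the environment's difficulty, never the agent, so convergence gains from earlier stages carry forward.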