Paper Title
Synthesize Efficient Safety Certificates for Learning-Based Safe Control using Magnitude Regularization
Paper Authors
Paper Abstract
Energy-function-based safety certificates can provide provable safety guarantees for the safe control tasks of complex robotic systems. However, recent studies on learning-based energy function synthesis consider only feasibility, which can cause over-conservativeness and result in less efficient controllers. In this work, we propose a magnitude regularization technique that improves the efficiency of safe controllers by reducing the conservativeness encoded in the energy function while preserving its provable safety guarantees. Specifically, we quantify conservativeness by the magnitude of the energy function and reduce it by adding a magnitude regularization term to the synthesis loss. We propose SafeMR, a reinforcement learning (RL) based synthesis algorithm that unifies the learning of the safe controller and the energy function. Experimental results show that the proposed method does reduce the conservativeness of the energy functions and outperforms the baselines in controller efficiency while guaranteeing safety.
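To make the idea of magnitude regularization concrete, below is a minimal, hypothetical sketch of how such a regularized synthesis loss could look. The abstract does not give the exact loss; the feasibility term (an energy-decrease condition with margin `eta`), the function name `phi_net`, and the weight `lam` are illustrative assumptions, not the paper's actual formulation.

```python
import torch

def synthesis_loss(phi_net, states, next_states, lam=0.1, eta=0.01):
    """Hypothetical sketch: feasibility loss plus magnitude regularization.

    phi_net maps a state to the scalar energy value phi(s). Safety certification
    typically requires phi to decrease (or stay non-positive) along closed-loop
    trajectories; the magnitude term additionally penalizes |phi(s)| so that the
    learned energy function is not unnecessarily conservative.
    """
    phi = phi_net(states)            # phi(s_t)
    phi_next = phi_net(next_states)  # phi(s_{t+1}) under the current controller

    # Feasibility term (assumed form): penalize violations of the
    # energy-decrease condition phi(s_{t+1}) - phi(s_t) <= -eta
    # on states where phi(s_t) >= 0.
    feasibility = torch.relu(phi_next - phi + eta) * (phi >= 0).float()

    # Magnitude regularization: shrink |phi| to reduce conservativeness.
    magnitude = phi.abs()

    return (feasibility + lam * magnitude).mean()
```

In this sketch, `lam` trades off conservativeness against feasibility: a larger `lam` shrinks the energy function's magnitude more aggressively, while `lam = 0` recovers a feasibility-only synthesis loss.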