Paper Title
DRLinFluids -- An open-source python platform of coupling Deep Reinforcement Learning and OpenFOAM
Paper Authors
Paper Abstract
We propose an open-source Python platform for applications of Deep Reinforcement Learning (DRL) in fluid mechanics. DRL has been widely used to optimize decision-making in nonlinear and high-dimensional problems. Here, an agent maximizes a cumulative reward by learning a feedback policy through acting in an environment. In control theory terms, the cumulative reward corresponds to the cost function, the agent to the actuator, the environment to the measured signals, and the learned policy to the feedback law. Thus, DRL assumes an interactive environment or, equivalently, a control plant. Setting up a numerical simulation plant with DRL is challenging and time-consuming. In this work, a novel Python platform named DRLinFluids is developed for this purpose, applying DRL to flow control and optimization problems in fluid mechanics. The simulations employ OpenFOAM, a popular, flexible Navier-Stokes solver used in industry and academia, together with Tensorforce or Tianshou as widely used, versatile DRL packages. The reliability and efficiency of DRLinFluids are demonstrated on two wake stabilization benchmark problems. DRLinFluids significantly reduces the effort of applying DRL in fluid mechanics and is expected to greatly accelerate academic and industrial applications.
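The agent/environment/reward/policy correspondence described above can be illustrated with a minimal, self-contained sketch. This is a toy stand-in, not the actual DRLinFluids or OpenFOAM API: the scalar state, the damping dynamics, and the `ToyFlowEnv`/`run_episode` names are all hypothetical, chosen only to mimic a wake-stabilization objective (reward = negative residual fluctuation).

```python
import random


class ToyFlowEnv:
    """Toy stand-in for a CFD plant (hypothetical; NOT the DRLinFluids API).

    The state is a scalar "fluctuation amplitude"; the action is an
    actuation strength in [-1, 1] that damps it. The reward penalizes the
    remaining fluctuation, mimicking a wake-stabilization cost function.
    """

    def __init__(self, seed=0):
        self.rng = random.Random(seed)
        self.state = 1.0

    def reset(self):
        self.state = 1.0
        return self.state

    def step(self, action):
        # Actuation damps the fluctuation; noise mimics flow unsteadiness.
        self.state = 0.9 * self.state - 0.5 * action \
            + 0.05 * self.rng.uniform(-1, 1)
        reward = -abs(self.state)  # cost function: residual fluctuation
        return self.state, reward


def run_episode(env, policy, steps=50):
    """Agent-environment loop: accumulate the reward a policy earns."""
    state = env.reset()
    total_reward = 0.0
    for _ in range(steps):
        action = policy(state)
        state, reward = env.step(action)
        total_reward += reward
    return total_reward


# A proportional feedback law stands in for the learned DRL policy.
feedback = lambda s: max(-1.0, min(1.0, 1.8 * s))
do_nothing = lambda s: 0.0

controlled = run_episode(ToyFlowEnv(), feedback)
passive = run_episode(ToyFlowEnv(), do_nothing)
print(controlled > passive)  # the feedback policy earns a higher cumulative reward
```

In DRLinFluids the same loop structure applies, except that `step` launches an OpenFOAM simulation segment and the policy is trained by a Tensorforce or Tianshou algorithm rather than fixed by hand.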