Paper Title
A non-cooperative meta-modeling game for automated third-party calibrating, validating, and falsifying constitutive laws with parallelized adversarial attacks
Paper Authors
Paper Abstract
The evaluation of constitutive models, especially for high-risk and high-regret engineering applications, requires efficient and rigorous third-party calibration, validation, and falsification. While there have been numerous efforts to develop paradigms and standard procedures for validating models, difficulties may arise from the sequential, manual, and often biased nature of commonly adopted calibration and validation processes, which slow down data collection, hamper progress toward discovering new physics, increase expenses, and may lead to misinterpretations of the credibility and application ranges of proposed models. This work introduces concepts from game theory and machine learning techniques to overcome many of these difficulties. We introduce an automated meta-modeling game in which two competing AI agents systematically generate experimental data to calibrate a given constitutive model and to explore its weaknesses, in order to improve experiment design and model robustness through competition. The two agents automatically search for the Nash equilibrium of the meta-modeling game in an adversarial reinforcement learning framework without human intervention. By capturing all possible design options of the laboratory experiments in a single decision tree, we recast the design of experiments as a game of combinatorial moves that can be resolved through deep reinforcement learning by the two competing players. Our adversarial framework emulates idealized scientific collaborations and competitions among researchers to achieve a better understanding of the application range of the learned material laws and to prevent misinterpretations caused by conventional AI-based third-party validation.
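The abstract frames calibration and falsification as a two-player non-cooperative game whose solution concept is a Nash equilibrium. The paper itself searches the experiment-design decision tree with deep reinforcement learning; purely as a toy illustration of the equilibrium objective, the sketch below solves a small zero-sum game by fictitious play. The payoff matrix, its values, and the function names are hypothetical assumptions for illustration, not taken from the paper.

```python
import numpy as np

# Hypothetical zero-sum payoff matrix for the meta-modeling game.
# Rows: the calibrating agent's choice among candidate experiment designs.
# Columns: the adversarial agent's choice among candidate attack experiments.
# Entry [i, j] is the calibrator's payoff, e.g. the negative prediction error
# of the model calibrated with design i when tested against attack j.
payoff = np.array([
    [ 0.8, -0.2,  0.1],
    [ 0.3,  0.5, -0.4],
    [-0.1,  0.2,  0.6],
])

def fictitious_play(A, n_iters=5000):
    """Approximate the mixed-strategy Nash equilibrium of the zero-sum game
    with payoff matrix A: each player repeatedly best-responds to the
    opponent's empirical action frequencies."""
    n_rows, n_cols = A.shape
    row_counts = np.zeros(n_rows)
    col_counts = np.zeros(n_cols)
    row_counts[0] = col_counts[0] = 1.0  # arbitrary initial actions
    for _ in range(n_iters):
        # Calibrator (maximizer) best-responds to the adversary's mixture.
        row_counts[np.argmax(A @ col_counts)] += 1.0
        # Adversary (minimizer) best-responds to the calibrator's mixture.
        col_counts[np.argmin(row_counts @ A)] += 1.0
    return row_counts / row_counts.sum(), col_counts / col_counts.sum()

row_strategy, col_strategy = fictitious_play(payoff)
print("calibrator equilibrium mixture:", row_strategy.round(3))
print("adversary equilibrium mixture: ", col_strategy.round(3))
print("game value:", float(row_strategy @ payoff @ col_strategy))
```

In the paper's setting, each "action" would instead be a path through the decision tree of laboratory experiment options, and the payoffs would come from calibrating and stress-testing the constitutive model, so the best responses are learned by deep reinforcement learning rather than enumerated as above.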