Paper title
Cooperative data-driven modeling
Paper authors
Paper abstract
Data-driven modeling in mechanics is evolving rapidly based on recent machine learning advances, especially on artificial neural networks. As the field matures, new data and models created by different groups become available, opening possibilities for cooperative modeling. However, artificial neural networks suffer from catastrophic forgetting, i.e. they forget how to perform an old task when trained on a new one. This hinders cooperation because adapting an existing model for a new task affects the performance on a previous task trained by someone else. The authors developed a continual learning method that addresses this issue, applying it here for the first time to solid mechanics. In particular, the method is applied to recurrent neural networks to predict history-dependent plasticity behavior, although it can be used on any other architecture (feedforward, convolutional, etc.) and to predict other phenomena. This work intends to spawn future developments on continual learning that will foster cooperative strategies among the mechanics community to solve increasingly challenging problems. We show that the chosen continual learning strategy can sequentially learn several constitutive laws without forgetting them, using less data to achieve the same error as standard (non-cooperative) training of one law per model.
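The abstract does not spell out which continual-learning strategy the authors use, so the following is only a generic illustration of the problem it addresses: a minimal NumPy sketch of catastrophic forgetting on a one-parameter toy regression, mitigated with elastic weight consolidation (EWC), one well-known continual-learning technique. The two "tasks", all names, and the choice of EWC are illustrative assumptions, not the authors' method.

```python
import numpy as np

rng = np.random.default_rng(0)

def make_task(w_true, n=200):
    """Toy regression task: y = w_true * x + small noise."""
    x = rng.normal(size=n)
    y = w_true * x + 0.01 * rng.normal(size=n)
    return x, y

def mse(x, y, w):
    return np.mean((w * x - y) ** 2)

def train(x, y, w0, penalty=None, lr=0.05, steps=500):
    """Gradient descent on MSE; `penalty` = (lam, fisher, w_anchor)
    adds the quadratic EWC term lam * fisher * (w - w_anchor)**2."""
    w = w0
    for _ in range(steps):
        grad = np.mean(2.0 * (w * x - y) * x)
        if penalty is not None:
            lam, fisher, w_anchor = penalty
            grad += 2.0 * lam * fisher * (w - w_anchor)
        w -= lr * grad
    return w

# Two toy tasks stand in for two constitutive laws learned in sequence.
xa, ya = make_task(2.0)    # task A
xb, yb = make_task(-1.0)   # task B

w_a = train(xa, ya, w0=0.0)              # learn task A first

# Fisher information of task A (for a Gaussian likelihood, ~E[x^2]):
fisher = np.mean(xa ** 2)

w_naive = train(xb, yb, w0=w_a)          # naive retraining: forgets task A
w_ewc = train(xb, yb, w0=w_a,            # EWC: anchored near task-A weights
              penalty=(1.0, fisher, w_a))

print(f"task-A error, naive: {mse(xa, ya, w_naive):.2f}, "
      f"EWC: {mse(xa, ya, w_ewc):.2f}")
```

Naive sequential training drives the parameter to the task-B optimum and the task-A error explodes; the EWC penalty keeps it between the two optima, trading a little task-B accuracy for retained task-A performance, which is the kind of behavior the abstract's cooperative-modeling argument relies on.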