Paper Title
Scalable Deep-Learning-Accelerated Topology Optimization for Additively Manufactured Materials
Paper Authors
Paper Abstract
Topology optimization (TO) is a popular and powerful computational approach for designing novel structures, materials, and devices. Two computational challenges have limited the applicability of TO to a variety of industrial applications. First, a TO problem often involves a large number of design variables to guarantee sufficient expressive power. Second, many TO problems require a large number of expensive physical model simulations, and those simulations cannot be parallelized. To address these issues, we propose a general scalable deep-learning (DL) based TO framework, referred to as SDL-TO, which utilizes parallel schemes in high-performance computing (HPC) to accelerate the TO process for designing additively manufactured (AM) materials. Unlike existing studies of DL for TO, our framework accelerates TO by learning from the iterative history data, training on the mapping between a given design and its gradient. The surrogate gradient is learned by combining parallel computing on multiple CPUs with distributed DL training on multiple GPUs. The learned TO gradient enables a fast online update scheme that replaces the expensive update based on the physical simulator or solver. Using a local sampling strategy, we reduce the intrinsic high dimensionality of the design space and improve both the training accuracy and the scalability of the SDL-TO framework. The method is demonstrated on benchmark examples and on AM materials design for heat conduction. The proposed SDL-TO framework achieves performance competitive with the baseline methods while significantly reducing the computational cost, with a speedup of around 8.6x over the standard TO implementation.
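The surrogate-gradient idea described in the abstract (train a model on past (design, gradient) pairs, then run cheap online updates with the learned gradient instead of the solver) can be sketched as below. This is a minimal illustration under strong simplifications, not the authors' implementation: a quadratic objective stands in for the expensive physics simulation, a linear least-squares fit stands in for the distributed DL training, and all names are hypothetical.

```python
import numpy as np

rng = np.random.default_rng(0)
n = 16                                  # toy number of design variables
A = np.diag(np.linspace(1.0, 4.0, n))   # stand-in for a stiffness-like operator

def true_gradient(x):
    """Stand-in for the expensive solver-based gradient (here: analytic, 2 A x)."""
    return 2.0 * A @ x

# 1) Collect (design, gradient) pairs, mimicking the iterative history data
#    that SDL-TO learns from (in the paper these come from TO iterations).
designs = rng.normal(size=(200, n))
grads = np.array([true_gradient(x) for x in designs])

# 2) Fit a surrogate gradient model. A linear least-squares fit is used here
#    purely for self-containment; the paper trains a deep network on GPUs.
W, *_ = np.linalg.lstsq(designs, grads, rcond=None)

def surrogate_gradient(x):
    return x @ W

# 3) Fast online design updates with the learned gradient, avoiding
#    further calls to the expensive solver.
x = rng.normal(size=n)
for _ in range(100):
    x = x - 0.05 * surrogate_gradient(x)

final_objective = float(x @ A @ x)
print(final_objective)  # driven toward zero by the surrogate-gradient updates
```

In this toy setting the surrogate recovers the gradient map exactly, so the online loop converges as if the true solver were used; in the actual framework the local sampling strategy is what keeps the surrogate accurate near the current design.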