Paper Title

On the Compression of Translation Operator Tensors in FMM-FFT-Accelerated SIE Simulators via Tensor Decompositions

Authors

Cheng Qian and Abdulkadir C. Yucel

Abstract

Tensor decomposition methodologies are proposed to reduce the memory requirement of translation operator tensors arising in fast multipole method-fast Fourier transform (FMM-FFT)-accelerated surface integral equation (SIE) simulators. These methodologies leverage Tucker, hierarchical Tucker (H-Tucker), and tensor train (TT) decompositions to compress the FFT'ed translation operator tensors stored in three-dimensional (3D) and four-dimensional (4D) array formats. Extensive numerical tests are performed to demonstrate the memory savings achieved and the computational overhead introduced by these methodologies for different simulation parameters. Numerical results show that the H-Tucker-based methodology for the 4D array format yields the maximum memory saving, while the Tucker-based methodology for the 3D array format introduces the minimum computational overhead. For many practical scenarios, all methodologies yield a significant reduction in the memory requirement of translation operator tensors while imposing negligible or acceptable computational overhead.
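The paper applies Tucker, H-Tucker, and TT decompositions to the specific translation operator tensors of an FMM-FFT SIE solver; the snippet below is only a generic illustrative sketch of one of these tools, the TT-SVD algorithm, in plain NumPy. It is not the authors' implementation: the function names, the synthetic low-rank test tensor, and the truncation tolerance are all assumptions made for the example. It shows the core idea behind the memory saving: a tensor that admits low TT ranks is stored as a chain of small cores instead of a full multidimensional array.

```python
import numpy as np

def tt_svd(tensor, tol=1e-10):
    """Decompose a d-way array into tensor-train (TT) cores via sequential
    truncated SVDs; `tol` is a relative singular-value cutoff (an assumed
    choice for this sketch, not a value from the paper)."""
    shape = tensor.shape
    d = len(shape)
    cores = []
    rank = 1
    mat = tensor.reshape(rank * shape[0], -1)
    for k in range(d - 1):
        U, S, Vt = np.linalg.svd(mat, full_matrices=False)
        # Keep only singular values above the relative tolerance.
        r_new = max(1, int(np.sum(S > tol * S[0])))
        cores.append(U[:, :r_new].reshape(rank, shape[k], r_new))
        # Fold the remaining factor and move to the next unfolding.
        mat = (S[:r_new, None] * Vt[:r_new]).reshape(r_new * shape[k + 1], -1)
        rank = r_new
    cores.append(mat.reshape(rank, shape[-1], 1))
    return cores

def tt_reconstruct(cores):
    """Contract the TT cores back into the full tensor (for error checking)."""
    out = cores[0]
    for core in cores[1:]:
        out = np.tensordot(out, core, axes=([out.ndim - 1], [0]))
    return out.reshape([c.shape[1] for c in cores])

# Synthetic low-rank 3D tensor standing in for a compressible operator tensor.
rng = np.random.default_rng(0)
A, B, C = (rng.standard_normal((40, 3)) for _ in range(3))
T = np.einsum('ir,jr,kr->ijk', A, B, C)          # CP-rank-3, shape 40x40x40

cores = tt_svd(T)
stored = sum(c.size for c in cores)               # entries kept in TT format
rel_err = np.linalg.norm(tt_reconstruct(cores) - T) / np.linalg.norm(T)
```

For this rank-3 example the TT cores store on the order of hundreds of entries versus 64,000 for the full array, at near machine-precision reconstruction error; the actual compression ratios and overheads for translation operator tensors are those reported in the paper's numerical tests.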
