Paper Title


Adaptive Learning of Tensor Network Structures

Authors

Meraj Hashemizadeh, Michelle Liu, Jacob Miller, Guillaume Rabusseau

Abstract


Tensor networks (TNs) offer a powerful framework to efficiently represent very high-dimensional objects. TNs have recently shown their potential for machine learning applications and offer a unifying view of common tensor decomposition models such as Tucker, tensor train (TT) and tensor ring (TR). However, identifying the best tensor network structure from data for a given task is challenging. In this work, we leverage the TN formalism to develop a generic and efficient adaptive algorithm to jointly learn the structure and the parameters of a TN from data. Our method is based on a simple greedy approach: starting from a rank-one tensor, it successively identifies the most promising tensor network edges for small rank increments. Our algorithm can adaptively identify TN structures with a small number of parameters that effectively optimize any differentiable objective function. Experiments on tensor decomposition, tensor completion and model compression tasks demonstrate the effectiveness of the proposed algorithm. In particular, our method outperforms the state-of-the-art evolutionary topology search [Li and Sun, 2020] for tensor decomposition of images (while being orders of magnitude faster) and finds efficient tensor network structures to compress neural networks, outperforming popular TT-based approaches [Novikov et al., 2015].
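The greedy procedure described in the abstract can be illustrated on the simplest case, a three-core tensor train. The following NumPy sketch is our own toy reconstruction, not the authors' implementation: it starts from a rank-one network and, at each step, tries incrementing each internal edge rank by one, refits the cores (here with a basic alternating least-squares routine of our own, in place of the paper's generic differentiable objective), and keeps the increment that most reduces the reconstruction error. All function names and the warm-start scheme are illustrative assumptions.

```python
import numpy as np

rng = np.random.default_rng(0)

def tt_reconstruct(G1, G2, G3):
    # Contract the three TT cores back into a full 3-way tensor.
    return np.einsum('ia,ajb,bk->ijk', G1, G2, G3)

def als_fit(T, G1, G2, G3, sweeps=30):
    """A few alternating least-squares sweeps at fixed TT ranks (toy fitter)."""
    d1, d2, d3 = T.shape
    for _ in range(sweeps):
        # Update G1: unfold T along mode 1, solve T(1) ~ G1 @ M.
        M = np.einsum('ajb,bk->ajk', G2, G3).reshape(G2.shape[0], d2 * d3)
        G1 = np.linalg.lstsq(M.T, T.reshape(d1, -1).T, rcond=None)[0].T
        # Update G3: unfold T along mode 3, solve T(3) ~ N @ G3.
        N = np.einsum('ia,ajb->ijb', G1, G2).reshape(d1 * d2, G2.shape[2])
        G3 = np.linalg.lstsq(N, T.reshape(d1 * d2, d3), rcond=None)[0]
        # Update G2 slice by slice along the middle mode.
        D = np.einsum('ia,bk->ikab', G1, G3).reshape(d1 * d3, -1)
        for j in range(d2):
            g = np.linalg.lstsq(D, T[:, j, :].reshape(-1), rcond=None)[0]
            G2[:, j, :] = g.reshape(G1.shape[1], G3.shape[0])
    return G1, G2, G3

def grow(G1, G2, G3, edge, eps=1e-2):
    """Increment one internal edge rank by 1, warm-starting with small noise."""
    if edge == 0:  # edge between cores 1 and 2
        G1 = np.hstack([G1, eps * rng.standard_normal((G1.shape[0], 1))])
        G2 = np.concatenate([G2, eps * rng.standard_normal((1,) + G2.shape[1:])], axis=0)
    else:          # edge between cores 2 and 3
        G2 = np.concatenate([G2, eps * rng.standard_normal(G2.shape[:2] + (1,))], axis=2)
        G3 = np.vstack([G3, eps * rng.standard_normal((1, G3.shape[1]))])
    return G1, G2, G3

# Synthetic target tensor with true TT ranks (2, 2).
d1, d2, d3 = 3, 4, 3
T = tt_reconstruct(rng.standard_normal((d1, 2)),
                   rng.standard_normal((2, d2, 2)),
                   rng.standard_normal((2, d3)))

# Start from a rank-one network, then greedily grow the most promising edge.
cores = als_fit(T, rng.standard_normal((d1, 1)),
                rng.standard_normal((1, d2, 1)),
                rng.standard_normal((1, d3)))
err = np.linalg.norm(T - tt_reconstruct(*cores)) / np.linalg.norm(T)
for step in range(2):
    trials = []
    for edge in (0, 1):  # try a rank increment on each internal edge
        cand = als_fit(T, *grow(*cores, edge))
        e = np.linalg.norm(T - tt_reconstruct(*cand)) / np.linalg.norm(T)
        trials.append((e, cand))
    best_err, cores = min(trials, key=lambda t: t[0])
    print(f'step {step}: relative error {best_err:.2e}')
```

Because each candidate is warm-started from the previous solution (padded with small noise), rank increments can only help, and the loop allocates rank where it reduces the error most; the paper applies the same idea to general TN topologies and arbitrary differentiable losses rather than this least-squares toy.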
