Paper Title


Denise: Deep Robust Principal Component Analysis for Positive Semidefinite Matrices

Paper Authors

Herrera, Calypso, Krach, Florian, Kratsios, Anastasis, Ruyssen, Pierre, Teichmann, Josef

Paper Abstract

The robust PCA of covariance matrices plays an essential role when isolating key explanatory features. The currently available methods for performing such a low-rank plus sparse decomposition are matrix-specific, meaning that those algorithms must be re-run for every new matrix. Since these algorithms are computationally expensive, it is preferable to learn and store a function that nearly instantaneously performs this decomposition when evaluated. Therefore, we introduce Denise, a deep learning-based algorithm for robust PCA of covariance matrices, or more generally, of symmetric positive semidefinite matrices, which learns precisely such a function. Theoretical guarantees for Denise are provided. These include a novel universal approximation theorem adapted to our geometric deep learning problem and convergence to an optimal solution to the learning problem. Our experiments show that Denise matches state-of-the-art performance in terms of decomposition quality, while being approximately $2000\times$ faster than the state-of-the-art, principal component pursuit (PCP), and $200\times$ faster than the current speed-optimized method, fast PCP.
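
To make the learned decomposition concrete, the sketch below illustrates, under assumptions of our own, the kind of function such an approach trains: a small PyTorch network maps a symmetric PSD matrix $M$ to a factor $U$, the low-rank part is taken as $L = UU^\top$ (PSD by construction), the sparse part as $S = M - L$, and an $\ell_1$-type penalty on $S$ drives the training. The architecture, dimensions, synthetic training data, and hyperparameters here are illustrative assumptions, not the authors' exact configuration.

```python
import torch
import torch.nn as nn

# Illustrative sketch (not the authors' exact architecture): learn a map from a
# symmetric PSD matrix M to a factor U so that L = U U^T is low-rank and PSD,
# and S = M - L is driven towards sparsity by an L1-type objective.

n, k = 20, 3               # matrix size and target rank (assumed values)
d_in = n * (n + 1) // 2    # number of upper-triangular entries of M

class DecompositionNet(nn.Module):
    def __init__(self):
        super().__init__()
        self.net = nn.Sequential(
            nn.Linear(d_in, 128), nn.ReLU(),
            nn.Linear(128, 128), nn.ReLU(),
            nn.Linear(128, n * k),
        )

    def forward(self, M):
        iu = torch.triu_indices(n, n)       # M is symmetric, so its upper
        x = M[:, iu[0], iu[1]]              # triangle determines it: (batch, d_in)
        U = self.net(x).reshape(-1, n, k)   # factor U: (batch, n, k)
        L = U @ U.transpose(1, 2)           # low-rank part, PSD by construction
        S = M - L                           # residual, encouraged to be sparse
        return L, S

model = DecompositionNet()
opt = torch.optim.Adam(model.parameters(), lr=1e-3)

for step in range(1000):
    # Synthetic training data: low-rank plus (diagonal) sparse PSD matrices.
    U0 = torch.randn(64, n, k)
    S0 = torch.diag_embed(torch.rand(64, n))
    M = U0 @ U0.transpose(1, 2) + S0
    L, S = model(M)
    loss = S.abs().mean()                   # L1-type sparsity penalty on S
    opt.zero_grad()
    loss.backward()
    opt.step()
```

Once such a network is trained, decomposing a new matrix amounts to a single forward pass, which is the source of the reported speed advantage over iterative solvers such as PCP and fast PCP.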
