Paper Title

Online and Lightweight Kernel-Based Approximated Policy Iteration for Dynamic p-Norm Linear Adaptive Filtering

Paper Authors

Yuki Akiyama, Minh Vu, Konstantinos Slavakis

Paper Abstract

This paper introduces a solution for dynamically (online) selecting the "optimal" p-norm to combat outliers in linear adaptive filtering, without any knowledge of the probability density function of the outliers. The proposed online and data-driven framework is built on kernel-based reinforcement learning (KBRL). To this end, novel Bellman mappings on reproducing kernel Hilbert spaces (RKHSs) are introduced. These mappings require no knowledge of the transition probabilities of Markov decision processes and are nonexpansive with respect to the underlying Hilbertian norm. The fixed-point sets of the proposed Bellman mappings are utilized to build an approximate policy-iteration (API) framework for the problem at hand. To address the "curse of dimensionality" in RKHSs, random Fourier features are employed to bound the computational complexity of the API. Numerical tests on synthetic data for several outlier scenarios demonstrate the superior performance of the proposed API framework over several non-RL and KBRL schemes.
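To make the p-norm choice concrete: in the classical least-mean-p-power (LMP) family of adaptive filters, the exponent p controls how strongly large errors drive the weight update, with p = 2 recovering LMS and smaller p down-weighting outlier-driven errors. The following minimal NumPy sketch illustrates this generic LMP update for a fixed p; the function name, step size, and toy data are illustrative and not the paper's algorithm, which selects p dynamically via the API framework.

```python
import numpy as np

def lmp_update(w, x, d, p=1.5, mu=0.01):
    """One least-mean-p-power (LMP) step for a linear filter.

    Stochastic-gradient descent on E|d - w^T x|^p:
    p = 2 recovers LMS, while smaller p down-weights large
    (outlier-driven) errors.
    """
    e = d - w @ x                                  # a-priori estimation error
    grad = -p * np.abs(e) ** (p - 1) * np.sign(e) * x
    return w - mu * grad

# Toy run: identify a length-5 filter under heavy-tailed noise.
rng = np.random.default_rng(0)
w_true = rng.standard_normal(5)
w = np.zeros(5)
for _ in range(2000):
    x = rng.standard_normal(5)
    noise = 0.1 * rng.standard_t(df=1.5)           # impulsive outliers
    w = lmp_update(w, x, w_true @ x + noise, p=1.2)
print("weight-error norm:", np.linalg.norm(w - w_true))
```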
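The random Fourier features (RFF) mentioned in the abstract refer to the standard Rahimi-Recht construction: a finite-dimensional random map z(.) whose inner products approximate a shift-invariant kernel, so the model dimension stays bounded no matter how many samples arrive. Below is a minimal sketch of this generic recipe for the Gaussian kernel; the parameters are illustrative and this is not the paper's specific implementation.

```python
import numpy as np

def rff_map(X, n_features=2000, sigma=1.0, seed=0):
    """Random Fourier feature map z with
    z(x)^T z(y) ~= exp(-||x - y||^2 / (2 * sigma^2))."""
    rng = np.random.default_rng(seed)
    d = X.shape[1]
    W = rng.normal(scale=1.0 / sigma, size=(d, n_features))  # spectral samples
    b = rng.uniform(0.0, 2.0 * np.pi, size=n_features)       # random phases
    return np.sqrt(2.0 / n_features) * np.cos(X @ W + b)

# Compare the exact Gaussian kernel with its RFF approximation.
rng = np.random.default_rng(1)
X = rng.standard_normal((5, 3))
Z = rff_map(X)
exact = np.exp(-0.5 * np.sum((X[:, None] - X[None, :]) ** 2, axis=-1))
print("max kernel error:", np.max(np.abs(Z @ Z.T - exact)))
```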
