Paper title
Concurrent learning in high-order tuners for parameter identification
Paper authors
Paper abstract
High-order tuners are algorithms that show promise in achieving greater efficiency than classic gradient-based algorithms in identifying the parameters of parametric models and/or in facilitating the progress of a control or optimization algorithm whose adaptive behavior relies on such models. For high-order tuners, robust stability properties, namely uniform global asymptotic (and exponential) stability, currently rely on a persistent excitation (PE) condition. In this work, we establish such stability properties with a novel analysis based on a Matrosov theorem and then show that the PE requirement can be relaxed via a concurrent learning technique driven by sampled data points that are sufficiently rich. We show numerically that concurrent learning may greatly improve efficiency. We incorporate reset methods that preserve the stability guarantees while providing, at relatively low additional computational cost, further improvements that may be relevant in applications demanding highly accurate parameter estimates.
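To make the concurrent-learning idea concrete, the sketch below implements a plain first-order gradient estimator augmented with a recorded-data term for a linear-in-parameters model. It is an illustration under assumed choices (the regressor phi, gain gamma, Euler step dt, and sampling schedule are all invented for this example), not the paper's high-order tuner, its Matrosov-based analysis, or its reset scheme. The point it demonstrates is the one stated in the abstract: a stack of sufficiently rich recorded data points can supply the richness that the instantaneous regressor lacks once it stops being persistently exciting.

```python
# Minimal illustrative sketch (not the paper's exact algorithm): a
# concurrent-learning gradient estimator for a linear-in-parameters model
# y(t) = theta_true^T phi(t), discretized with a simple Euler step.
# phi, gamma, dt, and the sampling schedule below are assumptions made
# for this example only.
import numpy as np

def phi(t):
    # Regressor that is informative early on but settles toward a constant,
    # so it is not persistently exciting over the whole run.
    return np.array([1.0, np.exp(-t), np.sin(min(t, 5.0))])

theta_true = np.array([2.0, -1.0, 0.5])
theta_hat = np.zeros(3)

gamma = 1.0                  # adaptation gain (assumed value)
dt = 1e-3                    # Euler integration step
stack_phi, stack_y = [], []  # recorded data stack for concurrent learning
stack_size = 10
sample_every = 500           # record one point every 0.5 s while filling the stack

t = 0.0
for step in range(20000):
    p = phi(t)
    y = theta_true @ p       # measured output (noise-free for simplicity)

    # Record early data points while the regressor is still informative,
    # so the stored regressors span the parameter space.
    if len(stack_phi) < stack_size and step % sample_every == 0:
        stack_phi.append(p.copy())
        stack_y.append(y)

    # Instantaneous gradient term plus the concurrent-learning memory term
    # built from the recorded data.
    grad = p * (p @ theta_hat - y)
    for pk, yk in zip(stack_phi, stack_y):
        grad += pk * (pk @ theta_hat - yk)

    theta_hat = theta_hat - dt * gamma * grad
    t += dt

print("parameter estimation error:", np.linalg.norm(theta_hat - theta_true))
```

Without the memory term, the estimate would stall in the directions that the regressor stops exciting after the initial transient; with a full-rank data stack, the error decays at a rate governed by the smallest eigenvalue of the sum of the stored outer products. The paper's contribution concerns the analogous relaxation of PE for high-order tuners, together with reset methods, which this first-order sketch does not attempt to reproduce.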