Title
Metric entropy of causal, discrete-time LTI systems
Authors
Abstract
In [1] it is shown that recurrent neural networks (RNNs) can learn, in a metric-entropy-optimal manner, discrete-time, linear time-invariant (LTI) systems. This is established by comparing the number of bits needed to encode the approximating RNN to the metric entropy of the class of LTI systems under consideration [2, 3]. The purpose of this note is to provide an elementary, self-contained proof of the metric entropy results in [2, 3], in the process of which minor mathematical issues appearing in [2, 3] are cleaned up. These corrections also lead to the correction of a constant in a result in [1] (see Remark 2.5).
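For context, the metric entropy invoked above can be sketched via the standard Kolmogorov covering-number definition; the notation below is illustrative and not taken verbatim from [2, 3]:

```latex
% Standard (Kolmogorov) definition of metric entropy; notation is
% illustrative, not quoted from [2, 3].
% Let $\mathcal{C}$ be a totally bounded subset of a metric space $(X, d)$,
% and let $N(\epsilon; \mathcal{C})$ denote the minimal number of balls of
% radius $\epsilon$ needed to cover $\mathcal{C}$. The metric entropy of
% $\mathcal{C}$ at scale $\epsilon$ is
\[
  H(\epsilon; \mathcal{C}) := \log_2 N(\epsilon; \mathcal{C}),
\]
% i.e., the number of bits required to specify an element of $\mathcal{C}$
% to within accuracy $\epsilon$ -- which is why it serves as the benchmark
% against which the bit cost of encoding an approximating RNN is compared.
```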