Paper title
Comments on Sejnowski's "The unreasonable effectiveness of deep learning in artificial intelligence" [arXiv:2002.04806]
Paper authors
Paper abstract
Terry Sejnowski's 2020 paper [arXiv:2002.04806] is entitled "The unreasonable effectiveness of deep learning in artificial intelligence". However, the paper doesn't attempt to answer the implied question of why Deep Convolutional Neural Networks (DCNNs) can approximate so many of the mappings that they have been trained to model. While there are detailed mathematical analyses, this short paper attempts to look at the issue differently, considering the way that these networks are used, the subset of these functions that can be achieved by training (starting from some location in the original function space), as well as the functions to which these networks will actually be applied.