Paper Title

On the Principles of Parsimony and Self-Consistency for the Emergence of Intelligence

Paper Authors

Yi Ma, Doris Tsao, Heung-Yeung Shum

Paper Abstract

Ten years into the revival of deep networks and artificial intelligence, we propose a theoretical framework that sheds light on understanding deep networks within a bigger picture of Intelligence in general. We introduce two fundamental principles, Parsimony and Self-consistency, that address two fundamental questions regarding Intelligence: what to learn and how to learn, respectively. We believe the two principles are the cornerstones for the emergence of Intelligence, artificial or natural. While these two principles have rich classical roots, we argue that they can be stated anew in entirely measurable and computable ways. More specifically, the two principles lead to an effective and efficient computational framework, compressive closed-loop transcription, that unifies and explains the evolution of modern deep networks and many artificial intelligence practices. While we mainly use modeling of visual data as an example, we believe the two principles will unify understanding of broad families of autonomous intelligent systems and provide a framework for understanding the brain.
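
For readers who want a concrete handle on "measurable and computable," a brief sketch may help. In the companion MCR² work this framework builds on (Yu et al., 2020), Parsimony is operationalized as maximizing the rate reduction of learned features Z = [z_1, ..., z_n] ∈ R^{d×n}, with class memberships encoded by diagonal matrices Π = {Π_j} (j = 1, ..., k) and ε the allowed quantization error. The notation below follows that companion paper; the objective in the present paper differs in details, so treat this as an illustrative sketch rather than the paper's definitive formulation:

\[
\Delta R(Z,\Pi)
= \underbrace{\frac{1}{2}\log\det\!\Big(I + \frac{d}{n\epsilon^{2}}\, Z Z^{\top}\Big)}_{R(Z):\ \text{rate of the whole feature set}}
\;-\;
\underbrace{\sum_{j=1}^{k} \frac{\operatorname{tr}(\Pi_{j})}{2n}\,\log\det\!\Big(I + \frac{d}{\operatorname{tr}(\Pi_{j})\,\epsilon^{2}}\, Z \Pi_{j} Z^{\top}\Big)}_{R_{c}(Z,\Pi):\ \text{sum of per-class rates}}
\]

Maximizing ΔR expands the feature set as a whole while compressing each class, yielding compact yet discriminative (parsimonious) representations. Self-consistency then enters, schematically, as a closed-loop game in which an encoder f and a decoder g compare the features of the data x with those of its regeneration g(f(x)); this closed loop is the "compressive closed-loop transcription" named in the abstract.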
