Paper Title
Learning a Deep Generative Model like a Program: the Free Category Prior
Paper Authors
Paper Abstract
Humans surpass the cognitive abilities of most other animals in our ability to "chunk" concepts into words, and then combine the words to combine the concepts. In this process, we make "infinite use of finite means", enabling us to learn new concepts quickly and nest concepts within each other. While program induction and synthesis remain at the heart of foundational theories of artificial intelligence, only recently has the community moved forward in attempting to use program learning as a benchmark task itself. The cognitive science community has thus often assumed that if the brain has simulation and reasoning capabilities equivalent to a universal computer, then it must employ a serialized, symbolic representation. Here we confront that assumption, and provide a counterexample in which compositionality is expressed via network structure: the free category prior over programs. We show how our formalism allows neural networks to serve as primitives in probabilistic programs. We learn both program structure and model parameters end-to-end.
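To make the core idea concrete, here is a minimal, hypothetical sketch (not the paper's implementation): primitives are typed morphisms, a "program" is any composable path in the free category they generate, and a crude prior samples such paths at random. All names (`PRIMITIVES`, `compose`, `sample_program`) and the toy primitive set are illustrative assumptions only.

```python
import random

PRIMITIVES = {
    # name: (domain type, codomain type, function) -- toy stand-ins for the
    # neural-network primitives described in the abstract
    "double": ("num", "num", lambda x: 2 * x),
    "inc":    ("num", "num", lambda x: x + 1),
    "to_str": ("num", "str", lambda x: str(x)),
    "shout":  ("str", "str", lambda s: s.upper() + "!"),
}

def compose(path):
    """Compose a type-checked sequence of primitive names into one function."""
    for a, b in zip(path, path[1:]):
        # codomain of each step must match the domain of the next
        assert PRIMITIVES[a][1] == PRIMITIVES[b][0], "types must compose"
    def run(x):
        for name in path:
            x = PRIMITIVES[name][2](x)
        return x
    return run

def sample_program(start_type, max_len=4, rng=random):
    """A crude 'free category prior': uniformly extend a composable path."""
    path, t = [], start_type
    for _ in range(rng.randrange(1, max_len + 1)):
        options = [n for n, (dom, _, _) in PRIMITIVES.items() if dom == t]
        if not options:
            break
        name = rng.choice(options)
        path.append(name)
        t = PRIMITIVES[name][1]
    return path

program = compose(["double", "inc", "to_str"])
print(program(3))  # 2*3 + 1 = 7, then stringified: "7"
```

In the paper's setting the primitives would be differentiable networks rather than lambdas, so both the path (program structure) and the primitives' parameters can be learned end-to-end; this sketch only shows the typed-composition structure.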