Paper Title

Discrete and continuous representations and processing in deep learning: Looking forward

Authors

Ruben Cartuyvels, Graham Spinks, Marie-Francine Moens

Abstract

Discrete and continuous representations of content (e.g., of language or images) have interesting properties to be explored for the understanding of or reasoning with this content by machines. This position paper puts forward our opinion on the role of discrete and continuous representations and their processing in the deep learning field. Current neural network models compute continuous-valued data. Information is compressed into dense, distributed embeddings. By stark contrast, humans use discrete symbols in their communication with language. Such symbols represent a compressed version of the world that derives its meaning from shared contextual information. Additionally, human reasoning involves symbol manipulation at a cognitive level, which facilitates abstract reasoning, the composition of knowledge and understanding, generalization and efficient learning. Motivated by these insights, in this paper we argue that combining discrete and continuous representations and their processing will be essential to build systems that exhibit a general form of intelligence. We suggest and discuss several avenues that could improve current neural networks with the inclusion of discrete elements to combine the advantages of both types of representations.
