Paper Title


DecisiveNets: Training Deep Associative Memories to Solve Complex Machine Learning Problems

Authors

Vincent Gripon, Carlos Lassance, Ghouthi Boukli Hacene

Abstract

Learning deep representations to solve complex machine learning tasks has become a prominent trend in recent years. Indeed, deep neural networks are now the gold standard in domains as varied as computer vision, natural language processing, and even combinatorial game playing. However, problematic limitations hide behind this surprising, seemingly universal capability. Among other concerns, explainability of decisions is a major issue, especially since deep neural networks comprise very large numbers of trainable parameters. Moreover, computational complexity can quickly become a problem, particularly in contexts constrained by real-time requirements or limited resources. Understanding how information is stored, and the impact this storage has on the system, therefore remains a major open question. In this chapter, we introduce a method to transform deep neural network models into deep associative memories, with simpler, more explainable, and less expensive operations. We show through experiments that these transformations can be performed without penalizing predictive performance. The resulting deep associative memories are excellent candidates for artificial intelligence systems that are easier to theorize about and manipulate.
