Paper Title
PaRT: Parallel Learning Towards Robust and Transparent AI
Paper Authors
Paper Abstract
This paper takes a parallel learning approach to achieve robust and transparent AI. A deep neural network is trained in parallel on multiple tasks, where each task is trained only on a subset of the network's resources. Each subset consists of network segments that can be combined and shared across specific tasks. Tasks can share resources with other tasks while also having independent task-related network resources. The trained network can therefore share similar representations across various tasks while also learning independent task-related representations. This allows for several crucial outcomes. (1) The parallel nature of our approach negates the issue of catastrophic forgetting. (2) The sharing of segments uses network resources more efficiently. (3) We show that the network does indeed reuse knowledge learned in some tasks in other tasks through shared representations. (4) By examining individual task-related and shared representations, the model offers transparency into the network and into the relationships across tasks in a multi-task setting. Evaluation of the proposed approach against complex competing approaches such as Continual Learning, Neural Architecture Search, and Multi-Task Learning shows that it is capable of learning robust representations. This is the first effort to train a DL model on multiple tasks in parallel. Our code is available at https://github.com/MahsaPaknezhad/PaRT
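For illustration only, below is a minimal PyTorch sketch of the core idea described in the abstract: each task is assigned a subset of network segments, some shared with other tasks and some task-related, and the tasks' updates are interleaved rather than applied sequentially. All names, segment assignments, and the training loop are hypothetical simplifications, not the authors' implementation; the actual PaRT code is in the linked repository.

import torch
import torch.nn as nn

WIDTH = 64  # feature width shared by all segments in this toy example

class SegmentedNet(nn.Module):
    """Toy network assembled from a pool of interchangeable segments."""
    def __init__(self, num_segments=6):
        super().__init__()
        # Pool of network segments (the shared resources).
        self.segments = nn.ModuleList(
            [nn.Sequential(nn.Linear(WIDTH, WIDTH), nn.ReLU())
             for _ in range(num_segments)]
        )
        self.heads = nn.ModuleDict()   # task-specific output heads
        self.task_segments = {}        # task name -> indices of assigned segments

    def register_task(self, name, segment_ids, num_classes):
        # Some indices may overlap across tasks (shared segments),
        # others are unique to one task (task-related segments).
        self.task_segments[name] = segment_ids
        self.heads[name] = nn.Linear(WIDTH, num_classes)

    def forward(self, x, task):
        # A task's forward pass touches only its own subset of segments.
        for i in self.task_segments[task]:
            x = self.segments[i](x)
        return self.heads[task](x)

net = SegmentedNet(num_segments=6)
# Tasks A and B share segments 0-1; segments 2-3 and 4-5 remain task-related.
net.register_task("task_a", [0, 1, 2, 3], num_classes=10)
net.register_task("task_b", [0, 1, 4, 5], num_classes=5)

opt = torch.optim.SGD(net.parameters(), lr=0.01)
loss_fn = nn.CrossEntropyLoss()

# Interleaved training: every round visits every task, so no task's weights
# are left untouched for long stretches (unlike purely sequential training).
for step in range(100):
    for task, n_cls in [("task_a", 10), ("task_b", 5)]:
        x = torch.randn(16, WIDTH)              # stand-in mini-batch
        y = torch.randint(0, n_cls, (16,))
        opt.zero_grad()
        loss_fn(net(x, task), y).backward()     # gradients reach only the
        opt.step()                              # segments this task uses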