Paper Title
lamBERT: Language and Action Learning Using Multimodal BERT
Paper Authors
Paper Abstract
Recently, the bidirectional encoder representations from transformers (BERT) model has attracted much attention in the field of natural language processing, owing to its high performance in language understanding tasks. The BERT model learns language representations that can be adapted to various tasks by pre-training on a large corpus in an unsupervised manner. This study proposes the language and action learning using multimodal BERT (lamBERT) model, which enables the learning of language and actions by 1) extending the BERT model to multimodal representations and 2) integrating it with reinforcement learning. To verify the proposed model, an experiment is conducted in a grid environment that requires language understanding for the agent to act properly. The lamBERT model obtains higher rewards in both multitask and transfer settings than comparison models, such as a convolutional neural network-based model and the lamBERT model without pre-training.
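The abstract describes the approach only at a high level. The following is a speculative minimal sketch of that idea, not the paper's implementation: instruction tokens and grid-cell observations are encoded as one multimodal token sequence by a BERT-style transformer encoder, with actor-critic heads attached for reinforcement learning. All module names, dimensions, and the fusion scheme (per-cell visual tokens, learned modality embeddings, a [CLS]-style summary token) are illustrative assumptions.

```python
# Hypothetical sketch of a multimodal-BERT policy in the spirit of lamBERT.
# Architecture details are assumptions, not the paper's actual design.
import torch
import torch.nn as nn

class MultimodalBertPolicy(nn.Module):
    def __init__(self, vocab_size=1000, d_model=128, n_actions=4,
                 grid_channels=3, n_layers=2, n_heads=4):
        super().__init__()
        # Language tokens are embedded as in BERT.
        self.token_emb = nn.Embedding(vocab_size, d_model)
        # Each grid cell becomes one "visual token" via a linear projection.
        self.obs_proj = nn.Linear(grid_channels, d_model)
        # Learned embeddings distinguish the two modalities.
        self.modality_emb = nn.Embedding(2, d_model)
        encoder_layer = nn.TransformerEncoderLayer(
            d_model=d_model, nhead=n_heads, batch_first=True)
        self.encoder = nn.TransformerEncoder(encoder_layer, num_layers=n_layers)
        # A [CLS]-style summary token feeds the actor and critic heads.
        self.cls = nn.Parameter(torch.zeros(1, 1, d_model))
        self.policy_head = nn.Linear(d_model, n_actions)
        self.value_head = nn.Linear(d_model, 1)

    def forward(self, instruction_ids, grid_obs):
        # instruction_ids: (B, L) token ids; grid_obs: (B, H*W, C) cell features.
        lang = self.token_emb(instruction_ids) + self.modality_emb.weight[0]
        vis = self.obs_proj(grid_obs) + self.modality_emb.weight[1]
        cls = self.cls.expand(lang.size(0), -1, -1)
        h = self.encoder(torch.cat([cls, lang, vis], dim=1))
        summary = h[:, 0]  # pooled multimodal representation
        return self.policy_head(summary), self.value_head(summary)

# Usage: action logits and a state-value estimate for an actor-critic update.
model = MultimodalBertPolicy()
logits, value = model(torch.randint(0, 1000, (2, 8)), torch.rand(2, 25, 3))
print(logits.shape, value.shape)  # torch.Size([2, 4]) torch.Size([2, 1])
```

In an actor-critic setup, the logits would parameterize the action distribution and the value estimate would serve as the baseline; the specific RL algorithm and the unsupervised pre-training procedure are not detailed in this abstract.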