Paper Title
AllenAct: A Framework for Embodied AI Research
Paper Authors
Paper Abstract
The domain of Embodied AI, in which agents learn to complete tasks through interaction with their environment from egocentric observations, has experienced substantial growth with the advent of deep reinforcement learning and increased interest from the computer vision, NLP, and robotics communities. This growth has been facilitated by the creation of a large number of simulated environments (such as AI2-THOR, Habitat and CARLA), tasks (like point navigation, instruction following, and embodied question answering), and associated leaderboards. While this diversity has been beneficial and organic, it has also fragmented the community: a huge amount of effort is required to do something as simple as taking a model trained in one environment and testing it in another. This discourages good science. We introduce AllenAct, a modular and flexible learning framework designed with a focus on the unique requirements of Embodied AI research. AllenAct provides first-class support for a growing collection of embodied environments, tasks and algorithms, provides reproductions of state-of-the-art models and includes extensive documentation, tutorials, start-up code, and pre-trained models. We hope that our framework makes Embodied AI more accessible and encourages new researchers to join this exciting area. The framework can be accessed at: https://allenact.org/