Title
MARLUI: Multi-Agent Reinforcement Learning for Adaptive UIs
Authors
Abstract
Adaptive user interfaces (UIs) automatically change an interface to better support users' tasks. Recently, machine learning techniques have enabled the transition to more powerful and complex adaptive UIs. However, a core challenge for adaptive UIs is their reliance on high-quality user data that has to be collected offline for each task. To overcome this challenge, we formulate UI adaptation as a multi-agent reinforcement learning problem. In our formulation, a user agent mimics a real user and learns to interact with a UI. Simultaneously, an interface agent learns UI adaptations to maximize the user agent's performance. The interface agent learns the task structure from the user agent's behavior and, based on that, can support the user agent in completing its task. Our method produces adaptation policies that are learned in simulation only and therefore requires no real user data. Our experiments show that the learned policies generalize to real users and achieve performance on par with data-driven supervised learning baselines.