Paper Title
Distributed Stochastic Bandit Learning with Delayed Context Observation
Paper Authors
Paper Abstract
We consider the problem where M agents collaboratively interact with an instance of a stochastic K-armed contextual bandit, where K >> M. The goal of the agents is to simultaneously minimize the cumulative regret, summed over all agents, over a time horizon T. We consider a setting where the exact context is observed only after a delay: at the time of choosing the action, the agents do not know the context and have access only to a distribution over the set of contexts. Such a situation arises in applications where the context must be predicted at decision time (e.g., weather forecasting or stock market prediction) and can be estimated once the reward is obtained. We propose an Upper Confidence Bound (UCB)-based distributed algorithm and prove regret and communication bounds for linearly parametrized reward functions. We validate the performance of our algorithm via numerical simulations on synthetic data and real-world MovieLens data.
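
The paper's distributed algorithm and proofs are not reproduced here. As a minimal, hypothetical single-agent sketch of the core idea the abstract describes, the snippet below applies a LinUCB-style rule that selects actions using the expected feature vector under the known context distribution, then performs the least-squares update with the exact context once it is revealed after the delay. The feature map phi, the Gaussian context distribution, the dimensions, and the exploration weight alpha are illustrative assumptions; the multi-agent communication protocol is omitted.

```python
import numpy as np

rng = np.random.default_rng(0)

d, K, T = 5, 10, 2000            # feature dim, arms, horizon (illustrative)
alpha = 1.0                      # UCB exploration weight (hypothetical choice)
theta_star = rng.normal(size=d)  # unknown linear reward parameter
arm_embed = rng.normal(size=(K, d))  # hypothetical per-arm embeddings

def phi(context, a):
    # Hypothetical feature map combining the context with an arm embedding.
    return context * arm_embed[a]

A = np.eye(d)       # ridge-regularized Gram matrix
b = np.zeros(d)     # accumulated reward-weighted features

for t in range(T):
    # The true context is hidden at decision time; the agent only knows a
    # distribution over contexts (here, a Gaussian with known mean mu).
    mu = rng.normal(size=d)
    true_context = mu + 0.1 * rng.normal(size=d)

    # Select the action using the *expected* feature under the distribution.
    theta_hat = np.linalg.solve(A, b)
    A_inv = np.linalg.inv(A)
    ucb = [phi(mu, a) @ theta_hat
           + alpha * np.sqrt(phi(mu, a) @ A_inv @ phi(mu, a))
           for a in range(K)]
    a_t = int(np.argmax(ucb))

    # The reward is generated from the true (still unobserved) context.
    reward = phi(true_context, a_t) @ theta_star + 0.05 * rng.normal()

    # After the delay, the exact context is revealed; update with it.
    x = phi(true_context, a_t)
    A += np.outer(x, x)
    b += reward * x
```

The only departure from standard LinUCB in this sketch is that the selection score is computed from phi(mu, a), the feature at the distribution's mean, while the parameter update uses the revealed true context, mirroring the delayed-observation structure the abstract describes.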