Paper Title

AAMDRL: Augmented Asset Management with Deep Reinforcement Learning

Authors

Benhamou, Eric, Saltiel, David, Ungari, Sandrine, Mukhopadhyay, Abhishek, Atif, Jamal

Abstract

Can an agent learn efficiently in a noisy and self-adapting environment with sequential, non-stationary, and non-homogeneous observations? Through trading bots, we illustrate how Deep Reinforcement Learning (DRL) can tackle this challenge. Our contributions are threefold: (i) the use of contextual information, also referred to as augmented state, in DRL; (ii) the impact of a one-period lag between observations and actions, which is more realistic for an asset management environment; (iii) the implementation of a new repetitive train-test method called walk-forward analysis, similar in spirit to cross-validation for time series. Although our experiment is on trading bots, it can easily be translated to other bot environments that operate in sequential settings with regime changes and noisy data. Our experiment for an augmented asset manager interested in finding the best portfolio for hedging strategies shows that AAMDRL achieves superior returns and lower risk.
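The walk-forward analysis mentioned in the abstract can be sketched as a sequence of chronologically ordered train-test splits, where each test block immediately follows its training window. The function name and window sizes below are illustrative assumptions, not the paper's actual settings:

```python
# A minimal sketch of walk-forward analysis: repeatedly train on a
# sliding window of past observations and test on the period that
# immediately follows, mimicking live deployment. Illustrative only;
# the window sizes are not those used in the paper.

def walk_forward_splits(n_obs, train_size, test_size):
    """Yield (train_indices, test_indices) pairs in chronological order."""
    start = 0
    while start + train_size + test_size <= n_obs:
        train = list(range(start, start + train_size))
        test = list(range(start + train_size, start + train_size + test_size))
        yield train, test
        start += test_size  # roll the window forward by one test period

splits = list(walk_forward_splits(n_obs=10, train_size=4, test_size=2))
# Each test block starts where the previous one ended, so no future
# data ever leaks into a training window.
```

Unlike standard k-fold cross-validation, the splits here respect temporal order: a model is only ever evaluated on data that comes after everything it was trained on.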
