Paper Title

Federated Reinforcement Learning for Real-Time Electric Vehicle Charging and Discharging Control

Authors

Zixuan Zhang, Yuning Jiang, Yuanming Shi, Ye Shi, Wei Chen

Abstract

With the recent advances in mobile energy storage technologies, electric vehicles (EVs) have become a crucial part of smart grids. When EVs participate in the demand response program, the charging cost can be significantly reduced by taking full advantage of real-time pricing signals. However, many stochastic factors exist in the dynamic environment, bringing significant challenges to the design of an optimal charging/discharging control strategy. This paper develops an optimal EV charging/discharging control strategy for different EV users under dynamic environments to maximize EV users' benefits. We first formulate this problem as a Markov decision process (MDP). Then we consider EV users with different behaviors as agents in different environments. Furthermore, a horizontal federated reinforcement learning (HFRL)-based method is proposed to fit various users' behaviors and dynamic environments. This approach can learn an optimal charging/discharging control strategy without sharing users' profiles. Simulation results illustrate that the proposed real-time EV charging/discharging control strategy performs well under various stochastic factors.
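As a rough illustration of the horizontal federated setup described in the abstract, the sketch below trains independent Q-learning agents on per-user EV charging MDPs and periodically averages their value models on a server, so that no user's profile or trajectory data leaves the client. The discretized state space (hour, state of charge, price level), the cost-plus-penalty reward, the FedAvg-style averaging, and the names EVChargingEnv and local_q_learning are illustrative assumptions, not the paper's exact formulation.

# Minimal HFRL sketch for EV charging/discharging control (assumptions noted above).
# State: (hour, state of charge, price level); actions: 0 discharge, 1 idle, 2 charge.
import numpy as np

rng = np.random.default_rng(0)

N_HOURS, N_SOC, N_PRICE, N_ACTIONS = 24, 11, 3, 3  # discretized toy MDP

class EVChargingEnv:
    """Toy per-user EV charging MDP with user-specific price behavior."""
    def __init__(self, price_bias, target_soc=8):
        self.price_bias = price_bias   # models heterogeneous local tariffs/behavior
        self.target_soc = target_soc   # desired state of charge at departure
    def reset(self):
        self.hour, self.soc = 0, int(rng.integers(2, 5))
        self.price = int(rng.integers(0, N_PRICE))
        return (self.hour, self.soc, self.price)
    def step(self, action):
        delta = int(action) - 1                            # -1, 0, +1 change in SoC
        self.soc = int(np.clip(self.soc + delta, 0, N_SOC - 1))
        reward = -(self.price + self.price_bias) * delta   # pay to charge, earn to discharge
        self.hour += 1
        done = self.hour >= N_HOURS
        if done:                                           # penalty if target SoC missed
            reward -= 2.0 * max(0, self.target_soc - self.soc)
        self.price = int(rng.integers(0, N_PRICE))
        return (self.hour % N_HOURS, self.soc, self.price), reward, done

def local_q_learning(Q, env, episodes=50, alpha=0.1, gamma=0.95, eps=0.1):
    """One local training round; only the model (Q-table), never user data, leaves the client."""
    Q = Q.copy()
    for _ in range(episodes):
        s, done = env.reset(), False
        while not done:
            a = int(rng.integers(N_ACTIONS)) if rng.random() < eps else int(np.argmax(Q[s]))
            s2, r, done = env.step(a)
            Q[s][a] += alpha * (r + gamma * (0.0 if done else np.max(Q[s2])) - Q[s][a])
            s = s2
    return Q

# Heterogeneous EV users = agents living in different environments.
envs = [EVChargingEnv(price_bias=b) for b in (0, 1, 2)]
global_Q = np.zeros((N_HOURS, N_SOC, N_PRICE, N_ACTIONS))

for _ in range(20):                        # federated rounds
    local_Qs = [local_q_learning(global_Q, env) for env in envs]
    global_Q = np.mean(local_Qs, axis=0)   # FedAvg-style aggregation on the server

print("Greedy action at hour 0, SoC 3, low price:", int(np.argmax(global_Q[0, 3, 0])))

Averaging full value tables (or network weights, in a deep-RL variant) is what makes the scheme "horizontal": every client shares the same model structure and state/action space but holds data from a different user.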
