Bi-level deep reinforcement learning for PEV decision-making guidance by coordinating transportation-electrification coupled systems

Xing, Qiang and Chen, Zhong and Wang, Ruisheng and Zhang, Ziqi (2023) Bi-level deep reinforcement learning for PEV decision-making guidance by coordinating transportation-electrification coupled systems. Frontiers in Energy Research, 10. ISSN 2296-598X

Text
pubmed-zip/versions/2/package-entries/fenrg-10-944313-r1/fenrg-10-944313.pdf - Published Version

Download (3MB)

Abstract

The random charging and dynamic traveling behaviors of massive plug-in electric vehicles (PEVs) pose challenges to the efficient and safe operation of transportation-electrification coupled systems (TECSs). To realize real-time scheduling of the charging demand of an urban PEV fleet, this paper proposes a PEV decision-making guidance (PEVDG) strategy based on bi-level deep reinforcement learning, which reduces user charging costs while ensuring the stable operation of distribution networks (DNs). Given the discrete time-series characteristics and the heterogeneity of decision actions, the PEVDG problem is decoupled into a bi-level finite Markov decision process, in which the upper and lower layers handle charging station (CS) recommendation and path navigation, respectively. Specifically, the upper-layer agent learns the mapping between the environment state and the optimal CS by perceiving the PEV charging requirements, CS equipment resources, and DN operating conditions. The action decision of the upper layer is then embedded into the state space of the lower-layer agent. Meanwhile, the lower-layer agent determines the optimal road segment for path navigation by capturing the real-time PEV state and the transportation network information. Further, two elaborate reward mechanisms are developed to motivate and penalize the decision-making learning of the two agents. Two extension mechanisms (i.e., dynamic adjustment of learning rates and adaptive selection of neural network units) are then embedded into the Rainbow algorithm based on the DQN architecture, yielding a modified Rainbow algorithm as the solution to the bi-level decision-making problem. Case studies are conducted within a practical urban zone with the TECS. The average rewards of the upper and lower levels are ¥-90.64 and ¥13.24, respectively; the average equilibrium degree of the charging service and the average charging cost are 0.96 and ¥42.45, respectively. Extensive experimental results show that the proposed methodology improves the generalization and learning ability of the two agents and facilitates the collaborative operation of the traffic and electrical networks.
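The bi-level coupling described in the abstract (the upper agent's CS recommendation is embedded into the lower agent's state before the lower agent chooses a road segment) can be sketched as follows. This is a minimal illustrative sketch only: it uses tabular Q-learning as a stand-in for the paper's modified Rainbow/DQN agents, and all states, state names, and reward values below are hypothetical, not taken from the paper.

```python
import random

class TabularQAgent:
    """Minimal epsilon-greedy Q-learning agent (an illustrative stand-in
    for the paper's Rainbow/DQN agents, not its actual implementation)."""
    def __init__(self, n_actions, lr=0.1, gamma=0.95, eps=0.2):
        self.q = {}                       # state -> list of action values
        self.n_actions = n_actions
        self.lr, self.gamma, self.eps = lr, gamma, eps

    def act(self, state):
        # Explore with probability eps or when the state is unseen.
        if random.random() < self.eps or state not in self.q:
            return random.randrange(self.n_actions)
        values = self.q[state]
        return max(range(self.n_actions), key=values.__getitem__)

    def learn(self, s, a, r, s_next):
        self.q.setdefault(s, [0.0] * self.n_actions)
        self.q.setdefault(s_next, [0.0] * self.n_actions)
        target = r + self.gamma * max(self.q[s_next])
        self.q[s][a] += self.lr * (target - self.q[s][a])

# Bi-level coupling: the upper agent recommends a charging station (CS),
# and its chosen action is embedded into the lower agent's state so the
# lower agent can navigate road segments toward that CS.
N_CS, N_SEGMENTS = 3, 4
upper = TabularQAgent(n_actions=N_CS)
lower = TabularQAgent(n_actions=N_SEGMENTS)

pev_state = ("soc_low", "zone_2")      # toy PEV observation (hypothetical)
cs_choice = upper.act(pev_state)       # upper level: CS recommendation
lower_state = (pev_state, cs_choice)   # upper action embedded in lower state
segment = lower.act(lower_state)       # lower level: path-navigation step

# Toy rewards: a negative charging cost for the upper level and a
# travel-time penalty for the lower level (values are illustrative).
upper.learn(pev_state, cs_choice, -42.45, pev_state)
lower.learn(lower_state, segment, -1.0, lower_state)
```

The key design point mirrored here is the state-space embedding: the lower-level state is the pair (PEV observation, upper-level action), so the path-navigation policy is conditioned on which CS was recommended.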

Item Type: Article
Subjects: Souths Book > Energy
Depositing User: Unnamed user with email support@southsbook.com
Date Deposited: 29 Apr 2023 07:17
Last Modified: 25 May 2024 09:37
URI: http://research.europeanlibrarypress.com/id/eprint/745
