Applied Sciences, Vol. 14, Pages 3682: A Deep Reinforcement Learning Approach for Microgrid Energy Transmission Dispatching


Applied Sciences doi: 10.3390/app14093682

Authors: Shuai Chen, Jian Liu, Zhenwei Cui, Zhiyu Chen, Hua Wang, Wendong Xiao

Optimal energy transmission dispatching in microgrid systems involves complicated transmission energy allocation and battery charging/discharging management, and it remains a challenging research problem because of the complex operating conditions and action constraints arising from the randomness and volatility of renewable energy. Traditional microgrid transmission dispatching mainly considers matching the demand side with the supply side from a macro perspective, without accounting for the impact of line losses. To address this issue, a Hierarchical Deep Q-network (HDQN) approach for microgrid energy dispatching is proposed. The approach takes the power flow of each line and the battery charging/discharging behavior as decision variables to minimize the system operation cost. It employs a two-layer agent optimization architecture to handle discrete and continuous variables simultaneously: one agent makes upper-layer decisions on the charging and discharging behavior of the batteries, and the other agent makes lower-layer decisions on the transmission energy allocation for the lines. The experimental results indicate that the proposed approach achieves better performance than existing approaches.
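The two-layer structure described in the abstract can be illustrated with a minimal sketch: an upper-layer DQN scores discrete battery actions (charge / idle / discharge), and a lower-layer network, conditioned on the state and the upper-layer choice, outputs a continuous allocation of transmission energy across lines. This is not the authors' implementation; the state dimension, number of lines, network sizes, and the softmax allocation head are all placeholder assumptions for illustration.

```python
# Minimal sketch (assumptions, not the paper's implementation) of a two-layer
# agent: an upper-layer DQN picks a discrete battery action, and a lower-layer
# network outputs continuous per-line power allocations conditioned on the
# grid state and the upper-layer decision.
import torch
import torch.nn as nn

STATE_DIM = 12          # e.g. load, renewable output, prices, battery SoC (assumed)
N_BATTERY_ACTIONS = 3   # charge / idle / discharge
N_LINES = 4             # number of transmission lines (assumed)

class UpperDQN(nn.Module):
    """Upper layer: Q-values over discrete battery charge/discharge actions."""
    def __init__(self):
        super().__init__()
        self.net = nn.Sequential(
            nn.Linear(STATE_DIM, 128), nn.ReLU(),
            nn.Linear(128, N_BATTERY_ACTIONS),
        )

    def forward(self, state):
        return self.net(state)

class LowerAllocator(nn.Module):
    """Lower layer: continuous transmission-energy allocation per line,
    conditioned on the state and the upper-layer battery decision."""
    def __init__(self):
        super().__init__()
        self.net = nn.Sequential(
            nn.Linear(STATE_DIM + N_BATTERY_ACTIONS, 128), nn.ReLU(),
            nn.Linear(128, N_LINES),
        )

    def forward(self, state, battery_action):
        one_hot = torch.nn.functional.one_hot(
            battery_action, N_BATTERY_ACTIONS).float()
        x = torch.cat([state, one_hot], dim=-1)
        # Softmax keeps the allocation non-negative and summing to one;
        # line-capacity and loss constraints would be enforced separately.
        return torch.softmax(self.net(x), dim=-1)

# One greedy decision step for a single (batched) state.
upper, lower = UpperDQN(), LowerAllocator()
state = torch.randn(1, STATE_DIM)
battery_action = upper(state).argmax(dim=-1)     # discrete upper-layer decision
line_allocation = lower(state, battery_action)   # continuous lower-layer decision
print(battery_action.item(), line_allocation.squeeze(0).tolist())
```

In this sketch the coupling between layers is one-directional (the lower layer sees the upper decision); how the paper trains the two agents jointly and encodes line losses in the reward is described in the full article.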
