Abstract: To improve operating efficiency and economic benefits, this article proposes a modified Rainbow-based deep reinforcement learning (DRL) strategy to achieve optimal scheduling of a charging station (CS). Because the charging process is a real-time matching between the charging demand of electric vehicles (EVs) and CS equipment resources, the CS charging scheduling problem is formulated as a finite Markov decision process (FMDP). Considering the multi-stakeholder interaction among EVs, CSs, and distribution networks (DNs), a comprehensive information perception model is constructed to extract the environmental state required by the agent. Given the stochastic nature of EV arrival and departure times, the startup of the charging pile control module is taken as the agent's action space. On this basis, the modified Rainbow approach is used to develop a time-scale-based CS scheduling scheme that compensates for the mismatch of resource requirements on the energy scale. Case studies were conducted on a CS integrated with a photovoltaic and energy storage system. The results show that the proposed method effectively reduces the CS operating cost and improves renewable energy consumption.
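For illustration only, the sketch below shows one way the described FMDP could be organized as an environment: the state collects perceived information from EVs, the CS, and the DN (time, price, PV output, storage state of charge, pending charging demand), the action is a binary startup vector over the charging piles, and the reward is the negative operating cost. The number of piles, the price and PV profiles, and the cost terms are placeholder assumptions and do not reproduce the paper's exact formulation or parameters.

```python
import numpy as np

class CSEnvSketch:
    """Minimal sketch of a charging-station scheduling MDP (assumed, not the paper's model)."""

    def __init__(self, n_piles=4, horizon=96, seed=0):
        self.n_piles = n_piles      # charging piles whose startup is controlled
        self.horizon = horizon      # e.g., 96 fifteen-minute slots per day
        self.rng = np.random.default_rng(seed)

    def reset(self):
        self.t = 0
        self.soc = 0.5                                        # storage state of charge in [0, 1]
        self.demand = self.rng.uniform(0, 1, self.n_piles)    # pending EV energy demand (p.u.)
        return self._state()

    def _state(self):
        # Perceived information from the DN (price), the CS (PV, storage), and EVs (demand)
        price = 0.5 + 0.4 * np.sin(2 * np.pi * self.t / self.horizon)   # assumed price curve
        pv = max(0.0, np.sin(np.pi * self.t / self.horizon))            # assumed PV profile (p.u.)
        return np.concatenate(([self.t / self.horizon, price, pv, self.soc], self.demand))

    def step(self, action):
        """action: binary vector, 1 = start the corresponding charging pile this slot."""
        action = np.asarray(action, dtype=float)
        state = self._state()
        price, pv = state[1], state[2]
        charge = np.minimum(action * 0.25, self.demand)       # energy delivered this slot
        grid_energy = max(0.0, charge.sum() - pv)             # shortfall purchased from the DN
        surplus = max(0.0, pv - charge.sum())                 # PV surplus absorbed by storage
        self.soc = min(1.0, self.soc + 0.1 * surplus)
        reward = -(price * grid_energy)                       # negative operating cost
        self.demand = self.demand - charge
        self.t += 1
        done = self.t >= self.horizon
        return self._state(), reward, done

# Smoke test of the interface with a random policy (a DRL agent such as Rainbow
# would replace this action selection).
env = CSEnvSketch()
s = env.reset()
done, total = False, 0.0
while not done:
    a = (np.random.rand(env.n_piles) > 0.5).astype(int)
    s, r, done = env.step(a)
    total += r
print(f"episode operating cost: {-total:.2f}")
```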