Towards Microgrid Resilience Enhancement via Mobile Power Sources and Repair Crews: A Multi-Agent Reinforcement Learning Approach

IEEE Transactions on Power Systems - Volume 39, Issue 1, Pages 1329-1345 - 2024
Yi Wang, Dawei Qiu, Fei Teng, Goran Strbac
Department of Electrical and Electronic Engineering, Imperial College London, London, U.K.

Abstract

Mobile power sources (MPSs) are increasingly deployed in microgrids as critical resources that coordinate with repair crews (RCs) to enhance resilience, owing to their flexibility and mobility across the coupled power-transport system. However, previous work solves the coordinated dispatch problem of MPSs and RCs in a centralized manner, under the assumption that the communication network remains fully functional after an extreme event. Growing evidence shows that such events can damage or degrade communication infrastructure, rendering centralized decision making impractical. To fill this gap, this paper formulates the resilience-driven dispatch problem of MPSs and RCs in a decentralized framework. To solve this problem, a hierarchical multi-agent reinforcement learning method with a two-level structure is proposed: the high-level action switches decision making between the power and transport networks, while the low-level action, constructed via a hybrid policy, computes continuous scheduling decisions in the power network and discrete routing decisions in the transport network. The proposed method also uses an embedded function encapsulating the system dynamics to enhance learning stability and scalability. Case studies on the IEEE 33-bus and 69-bus power networks validate the effectiveness of the proposed method for load restoration.
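
The two-level, hybrid-action structure described in the abstract can be illustrated with a minimal policy-network sketch. The code below is not the authors' implementation; it assumes a single MPS/RC agent with a flat observation vector, one continuous dispatch set-point, and a small discrete set of candidate routes, with the environment expected to clip the dispatch value to its feasible range. All names (HierarchicalHybridPolicy, obs_dim, n_routes, etc.) are illustrative.

```python
# Minimal sketch (PyTorch) of a two-level hybrid-action policy for one agent:
# a high-level categorical switch between the power network (continuous
# scheduling) and the transport network (discrete routing), plus two
# low-level heads, one Gaussian and one categorical.
import torch
import torch.nn as nn
from torch.distributions import Categorical, Normal


class HierarchicalHybridPolicy(nn.Module):
    def __init__(self, obs_dim: int, n_routes: int, hidden: int = 64):
        super().__init__()
        # Shared encoder over the agent's local observation.
        self.encoder = nn.Sequential(
            nn.Linear(obs_dim, hidden), nn.Tanh(),
            nn.Linear(hidden, hidden), nn.Tanh(),
        )
        # High-level head: 0 = act in power network, 1 = act in transport network.
        self.high_head = nn.Linear(hidden, 2)
        # Low-level continuous head: Gaussian dispatch set-point.
        self.mu_head = nn.Linear(hidden, 1)
        self.log_std = nn.Parameter(torch.zeros(1))
        # Low-level discrete head: categorical choice over candidate routes.
        self.route_head = nn.Linear(hidden, n_routes)

    def forward(self, obs: torch.Tensor):
        h = self.encoder(obs)
        # High-level switching action.
        high_dist = Categorical(logits=self.high_head(h))
        mode = high_dist.sample()
        # Low-level actions; the environment executes only the one
        # matching the selected mode.
        cont_dist = Normal(self.mu_head(h), self.log_std.exp())
        dispatch = cont_dist.sample()       # continuous scheduling decision
        route_dist = Categorical(logits=self.route_head(h))
        route = route_dist.sample()         # discrete routing decision
        # Joint log-probability of the hierarchical action actually taken.
        low_logp = torch.where(mode == 0,
                               cont_dist.log_prob(dispatch).sum(-1),
                               route_dist.log_prob(route))
        return mode, dispatch, route, high_dist.log_prob(mode) + low_logp


if __name__ == "__main__":
    policy = HierarchicalHybridPolicy(obs_dim=16, n_routes=5)
    mode, dispatch, route, logp = policy(torch.randn(1, 16))
    print(mode.item(), dispatch.item(), route.item(), logp.item())
```

In a decentralized multi-agent setting such as the one studied here, one such policy per MPS or RC agent would typically be trained with a centralized-training, decentralized-execution actor-critic scheme, so that each agent can act on local observations when communication is degraded.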

Keywords

Mobile power sources; repair crews; microgrid resilience; power-transport network; hierarchical multi-agent reinforcement learning
