Conference paper

Reinforcement Learning for Thermostatically Controlled Loads Control using Modelica and Python

Oleh Lukianykhin
Ukrainian Catholic University

Tetiana Bogodorova
Rensselaer Polytechnic Institute

DOI: https://doi.org/10.3384/ecp202017431

Part of: Proceedings of Asian Modelica Conference 2020, Tokyo, Japan, October 08-09, 2020

Linköping Electronic Conference Proceedings 174:4, pp. 31-40


Published: 2020-11-02

ISBN: 978-91-7929-775-6

ISSN: 1650-3686 (print), 1650-3740 (online)

Abstract

The aim of the project is to investigate and assess opportunities for applying reinforcement learning (RL) to power system control. As a proof of concept (PoC), voltage control of thermostatically controlled loads (TCLs) for power consumption regulation was developed using a Modelica-based pipeline. The Q-learning RL algorithm was validated for both deterministic and stochastic initialization of the TCLs. The latter modelling is closer to real grid behaviour and makes the control design more challenging because of the stochastic nature of load switching. In addition, the paper shows the influence of Q-learning parameters, including the discretization of the state-action space, on the controller performance.
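The controller described in the abstract is based on tabular Q-learning over a discretized state-action space. The following is a minimal sketch of that general technique in Python, the language of the paper's pipeline; the toy chain environment, function names, and all hyperparameter values below are illustrative assumptions, not the paper's Modelica TCL model or its tuned settings.

```python
import numpy as np

def q_learning(n_states, n_actions, step_fn, episodes=500, horizon=30,
               alpha=0.5, gamma=0.9, eps=0.3, seed=0):
    """Tabular Q-learning with epsilon-greedy exploration.

    step_fn(s, a) -> (next_state, reward) is the environment; in the paper
    this role is played by a Modelica model exposed to Python, here it is
    any discrete stand-in.
    """
    rng = np.random.default_rng(seed)
    q = np.zeros((n_states, n_actions))
    for _ in range(episodes):
        s = 0  # assumed fixed initial (discretized) state
        for _ in range(horizon):
            # epsilon-greedy action selection
            if rng.random() < eps:
                a = int(rng.integers(n_actions))
            else:
                a = int(np.argmax(q[s]))
            s2, r = step_fn(s, a)
            # standard Q-learning temporal-difference update
            q[s, a] += alpha * (r + gamma * np.max(q[s2]) - q[s, a])
            s = s2
    return q

# Toy 5-state chain: action 1 moves right, action 0 stays;
# reward 1 whenever the agent ends a step in the last state.
def chain_step(s, a, n=5):
    s2 = min(s + 1, n - 1) if a == 1 else s
    return s2, 1.0 if s2 == n - 1 else 0.0

Q = q_learning(5, 2, chain_step)
policy = np.argmax(Q, axis=1)  # greedy policy learned from the Q-table
```

The discretization the paper studies corresponds to the choice of `n_states` and `n_actions`: a coarser grid gives a smaller Q-table that learns faster but represents the control signal less precisely.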

Keywords

Modelica, Python, Reinforcement Learning, Q-learning, Thermostatically Controlled Loads, Power System, Demand Response, Dymola, OpenAI Gym, JModelica.org, OpenModelica

