KTH Publications (kth.se)
A Review of Reinforcement Learning for Controlling Building Energy Systems From a Computer Science Perspective
KTH, School of Architecture and the Built Environment (ABE), Civil and Architectural Engineering. ORCID iD: 0000-0002-4851-0785
KTH, School of Architecture and the Built Environment (ABE), Civil and Architectural Engineering, Sustainable Buildings. KTH, School of Architecture and the Built Environment (ABE), Civil and Architectural Engineering, Building Technology and Design. Uponor AB, Hackstavägen 1, S-72132 Västerås, Sweden. ORCID iD: 0000-0001-6266-8485
RISE Research Institutes of Sweden, Division Digital Systems, Computer Science, Isafjordsgatan 28 A, S-16440 Kista, Sweden.
KTH, School of Electrical Engineering and Computer Science (EECS), Computer Science, Network and Systems Engineering. ORCID iD: 0000-0001-9810-3478
2023 (English). In: Sustainable Cities and Society, ISSN 2210-6707, Vol. 89, article id 104351. Article, review/survey (Refereed). Published.
Abstract [en]

Energy-efficient control of energy systems in buildings is a widely recognized challenge due to the use of low-temperature heating, renewable electricity sources, and the incorporation of thermal storage. Reinforcement Learning (RL) has been shown to be effective at minimizing energy usage in buildings while maintaining thermal comfort, despite the high system complexity. However, RL has certain disadvantages that make it challenging to apply in engineering practice. In this review, we take a computer science approach to identifying three main categories of challenges in using RL for control of Building Energy Systems (BES): RL in single buildings, RL in building clusters, and multi-agent aspects. For each topic, we analyse the main challenges and the state-of-the-art approaches to alleviating them. We also identify several future research directions on subjects such as sample efficiency, transfer learning, and the theoretical properties of RL in building energy systems. In conclusion, our review shows that the work on RL for BES control is still in its initial stages. Although significant progress has been made, more research is needed to realize the goal of RL-based control of BES at scale.
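The abstract's core setting — an RL agent trading off energy use against thermal comfort — can be sketched with a toy tabular Q-learning loop. Everything below (the one-zone thermal dynamics, the reward weights, the temperature discretization) is invented purely for illustration and is not taken from the paper under review:

```python
import random

random.seed(0)

TEMPS = list(range(15, 26))   # discrete indoor temperatures, degC (assumed)
ACTIONS = [0, 1]              # 0 = heater off, 1 = heater on
COMFORT = (20, 22)            # acceptable comfort band, degC (assumed)

def step(temp, action):
    """Toy dynamics: heating raises temperature by 1 degC, ambient losses lower it."""
    temp = temp + (1 if action == 1 else -1)
    temp = max(TEMPS[0], min(TEMPS[-1], temp))
    energy = 1.0 if action == 1 else 0.0
    discomfort = max(0, COMFORT[0] - temp) + max(0, temp - COMFORT[1])
    reward = -energy - 5.0 * discomfort   # penalize energy use and comfort violations
    return temp, reward

# Tabular Q-values over (temperature, action) pairs
Q = {(t, a): 0.0 for t in TEMPS for a in ACTIONS}
alpha, gamma, eps = 0.1, 0.9, 0.1

for episode in range(500):
    temp = random.choice(TEMPS)
    for _ in range(50):
        # epsilon-greedy action selection
        if random.random() < eps:
            a = random.choice(ACTIONS)
        else:
            a = max(ACTIONS, key=lambda x: Q[(temp, x)])
        nxt, r = step(temp, a)
        best_next = max(Q[(nxt, x)] for x in ACTIONS)
        Q[(temp, a)] += alpha * (r + gamma * best_next - Q[(temp, a)])
        temp = nxt

# Greedy policy: chosen action per temperature
policy = {t: max(ACTIONS, key=lambda a: Q[(t, a)]) for t in TEMPS}
print(policy)
```

With these made-up rewards, the learned policy heats when the zone is cold and idles when it is warm; the review's challenges (sample efficiency, transfer across buildings, multi-agent coordination) concern exactly what this sketch glosses over at realistic scale.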

Place, publisher, year, edition, pages
Elsevier BV, 2023. Vol. 89, article id 104351.
Keywords [en]
Building Energy System, HVAC, Heating, Cooling, Reinforcement learning, Machine learning, RL, ML
National Category
Energy Engineering
Identifiers
URN: urn:nbn:se:kth:diva-323582
DOI: 10.1016/j.scs.2022.104351
ISI: 000910896200001
Scopus ID: 2-s2.0-85144402805
OAI: oai:DiVA.org:kth-323582
DiVA, id: diva2:1735309
Note

QC 20230208

Available from: 2023-02-08. Created: 2023-02-08. Last updated: 2025-12-18. Bibliographically approved.

Open Access in DiVA

No full text in DiVA

Other links

Publisher's full text
Scopus

Authority records

Weinberg, David; Wang, Qian; Fischione, Carlo

Search in DiVA

By author/editor
Weinberg, David; Wang, Qian; Fischione, Carlo
By organisation
Civil and Architectural Engineering; Sustainable Buildings; Building Technology and Design; Network and Systems Engineering
In the same journal
Sustainable cities and society
Energy Engineering

Search outside of DiVA

Google
Google Scholar
