Deep Reinforcement Learning Based Traffic Signal Control: A Comparative Analysis
Cho Chun Shik Graduate School of Mobility, Korea Advanced Institute of Science and Technology, Daejeon 34051 South Korea.
KTH, School of Architecture and the Built Environment (ABE), Civil and Architectural Engineering, Transport planning. ORCID iD: 0000-0002-2141-0389
2023 (English). In: 14th International Conference on Ambient Systems, Networks and Technologies Networks, ANT 2023 and The 6th International Conference on Emerging Data and Industry 4.0, EDI40 2023, Elsevier BV, 2023, p. 275-282. Conference paper, Published paper (Refereed)
Abstract [en]

Recent advances in deep learning enable reinforcement learning (RL) to handle high-dimensional decision-making problems, attracting growing attention in the transport domain. Several studies have applied these methods to the intractable traffic signal control (TSC) problem and achieved promising results. However, few studies comprehensively investigate how key design elements, including state definitions, reward functions, and deep reinforcement learning (DRL) methods, affect TSC performance. To fill this research gap, this paper first selects commonly used design elements from the existing literature. We then compare their learning stability and control performance at an isolated intersection under different scenarios via simulation experiments. The results show that the quantitative state (e.g., the number of vehicles on each lane) and the image-like state (e.g., vehicle positions and speeds) have no significant impact on performance under different traffic demands. In contrast, the choice of reward function affects performance markedly, especially under high traffic demand. Also, high-resolution vehicular network data may not improve control performance compared with ordinary camera data. In addition, value-based DRL algorithms outperform the policy-based algorithm and traditional TSC methods. These findings provide insights and guidance for transport engineers designing efficient DRL-based TSC systems in real traffic environments.
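Two of the design elements the abstract compares, the quantitative state and a queue-based reward, can be illustrated with a minimal sketch. This is not the paper's code; the function names and the choice of negative total queue length as the reward are illustrative assumptions, though queue-based rewards are common in DRL-based TSC work.

```python
# Illustrative sketch (not the paper's implementation) of two design
# elements compared in the study: a quantitative state and a reward
# function for an isolated intersection. All names are hypothetical.

def quantitative_state(vehicles_per_lane):
    """Quantitative state: the number of vehicles on each incoming lane."""
    return tuple(vehicles_per_lane)

def queue_reward(queue_lengths):
    """Reward as negative total queue length: shorter queues -> higher reward."""
    return -sum(queue_lengths)

if __name__ == "__main__":
    state = quantitative_state([4, 2, 7, 1])   # vehicle counts on 4 approach lanes
    reward = queue_reward([3, 0, 5, 1])        # queued vehicles per lane
    print(state, reward)                       # -> (4, 2, 7, 1) -9
```

An image-like state, by contrast, would encode vehicle positions and speeds on a discretized grid of the approach lanes; the abstract reports that the two state designs perform similarly, while the reward choice matters more.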

Place, publisher, year, edition, pages
Elsevier BV, 2023. p. 275-282
Keywords [en]
Deep learning, Reinforcement learning (RL), Simulation, Traffic signal control (TSC)
National Category
Transport Systems and Logistics
Identifiers
URN: urn:nbn:se:kth:diva-338609
DOI: 10.1016/j.procs.2023.03.036
Scopus ID: 2-s2.0-85161476980
OAI: oai:DiVA.org:kth-338609
DiVA, id: diva2:1809825
Conference
14th International Conference on Ambient Systems, Networks and Technologies Networks, ANT 2023 and The 6th International Conference on Emerging Data and Industry 4.0, EDI40 2023, Leuven, Belgium, Mar 15 2023 - Mar 17 2023
Note

QC 20231106

Available from: 2023-11-06 Created: 2023-11-06 Last updated: 2023-11-06. Bibliographically approved

Open Access in DiVA

No full text in DiVA

Other links

Publisher's full text, Scopus

Authority records

Ma, Zhenliang
