This article explores the impact of distribution shift on reinforcement learning agents for traffic signal control. Distribution shift occurs when the data distribution at deployment (i.e., test data) differs from the training data distribution, degrading model performance. Through simulations based on real-world traffic data, we analyze two key components of this shift: the Kullback–Leibler (KL) divergence between traffic patterns and the total traffic volume. Our results show that when traffic volume increases by 1000 vehicles over the training volume, the average delay increases by 103%, while a 0.1 increase in KL divergence increases the delay by 58%. This work demonstrates the measurable impact of distribution shift on the performance of AI-agent-based traffic signal control systems.
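As a minimal illustration of the first shift component, the sketch below computes the KL divergence between two discrete traffic-pattern distributions. The distributions, the binning into time-of-day shares, and the function name are hypothetical assumptions for illustration, not taken from the study itself.

```python
import math

def kl_divergence(p, q, eps=1e-12):
    """D(P || Q) for discrete distributions given as probability lists.

    A small eps guards against log(0) when a bin is empty. Assumes p and q
    are the same length and each sums to ~1.
    """
    return sum(pi * math.log((pi + eps) / (qi + eps)) for pi, qi in zip(p, q))

# Hypothetical normalized traffic shares across five time-of-day bins.
train_pattern = [0.10, 0.25, 0.30, 0.20, 0.15]   # pattern seen during training
deploy_pattern = [0.05, 0.20, 0.35, 0.25, 0.15]  # pattern seen at deployment

shift = kl_divergence(deploy_pattern, train_pattern)
print(f"KL divergence (deploy || train): {shift:.4f}")
```

A larger value of `shift` indicates a deployment pattern further from the training distribution; the study's reported 0.1 increase in KL divergence would correspond to a noticeably different time-of-day profile under a binning like this one.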