Urban environments and mobility are threatened by traffic congestion caused by growing vehicle numbers and urbanization. To address this problem, we propose a deep reinforcement learning (DRL)-based urban network geofencing (UNG) strategy for traffic management that improves traffic operations and sustainability. The proposed solution creates a real-time geofence consisting of several sub-networks in which dynamic speed limit policies are implemented; all road links in a sub-network share the same speed limit policy within a control cycle. An actor-critic framework is developed to learn the discrete speed limits of the sub-networks in a continuous action space, and a reward function is designed based on the average speed of vehicles on the network. A twin delayed deep deterministic policy gradient (TD3) method is introduced to calibrate the actor-critic networks and to mitigate the overestimation bias that arises from function approximation. Based on traffic simulation of a real-world local network in Shanghai, the performance of the geofencing methods is investigated under various scenarios with different levels of traffic demand and control settings. The findings suggest that the proposed TD3-UNG controller generates beneficial dynamic speed limit policies that reduce total time spent, emissions, and fuel consumption across these scenarios.
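Two of the mechanisms named above can be illustrated with a minimal sketch: mapping a continuous actor output onto a discrete speed limit for a sub-network, and the clipped double-Q rule that TD3 uses to curb overestimation bias. The candidate limits and function names below are hypothetical illustrations, not the paper's actual implementation.

```python
# Hypothetical candidate speed limits (km/h) shared by all links in a sub-network.
SPEED_LIMITS = [30, 40, 50, 60]

def to_discrete_limit(action: float) -> int:
    """Map a continuous actor output in [-1, 1] to the nearest candidate limit."""
    # Scale [-1, 1] onto the index range of the candidate limits, then clip.
    idx = round((action + 1.0) / 2.0 * (len(SPEED_LIMITS) - 1))
    return SPEED_LIMITS[max(0, min(idx, len(SPEED_LIMITS) - 1))]

def td3_target(q1: float, q2: float) -> float:
    """Clipped double-Q: take the minimum of the twin critics' estimates."""
    return min(q1, q2)

def average_speed_reward(speeds_kmh: list[float]) -> float:
    """Reward proportional to the network-wide average vehicle speed."""
    return sum(speeds_kmh) / len(speeds_kmh)

print(to_discrete_limit(-1.0))          # lowest candidate limit
print(to_discrete_limit(1.0))           # highest candidate limit
print(td3_target(3.2, 2.9))             # pessimistic twin-critic target
print(average_speed_reward([30, 50]))   # mean network speed
```

Taking the minimum of two independently trained critics makes the bootstrapped target pessimistic, which is how TD3 counteracts the overestimation that a single approximated Q-function tends to produce.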
QC 20250711