Scholarly Article

An Adaptive Route Guidance Model Considering the Effect of Traffic Signals Based on Deep Reinforcement Learning
Document Type
Periodical
Source
IEEE Intelligent Transportation Systems Magazine, 16(3):21-34, June 2024
Subject
Transportation
Aerospace
Computing and Processing
Components, Circuits, Devices and Systems
Fields, Waves and Electromagnetics
Adaptation models
Routing
Heuristic algorithms
Adaptive systems
Navigation
Vehicle dynamics
Behavioral sciences
Real-time systems
Deep reinforcement learning
Traffic control
Language
English
ISSN
1939-1390 (Print)
1941-1197 (Electronic)
Abstract
Navigation or route guidance systems are designed to provide drivers with real-time travel information and the associated recommended routes for their trips. Classical route choice models typically rely on utility theory to represent drivers’ route choice behavior. Such choices, however, may not be optimal from either the individual or the system perspective, largely because drivers usually have imperfect knowledge of time-varying traffic conditions. In this article, we propose a new model-free deep reinforcement learning (DRL) approach to the adaptive route guidance problem based on microsimulation. The proposed approach consists of three interconnected algorithms: a network edge labeling algorithm, a routing plan identification algorithm, and an adaptive route guidance algorithm. Simulation experiments on both a toy network and a real-world network of Suzhou, China, demonstrate the effectiveness of the proposed approach in guiding a single vehicle as well as multiple vehicles through complex traffic environments. Comparative results confirm that the DRL approach outperforms the traditional shortest-path method by further reducing the average travel time in the network.
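
To illustrate the kind of model-free DRL routing the abstract describes, the sketch below shows a minimal Q-network agent that, at each intersection, scores the outgoing road segments from a local traffic state and follows the best valid one. This is a hypothetical illustration under assumed names and dimensions (EdgeRoutingQNet, choose_next_edge, a 10-feature state, a maximum out-degree of 4), not the authors' implementation or their three algorithms.

```python
# Hypothetical sketch of a DRL next-edge selector; all names and sizes are
# illustrative assumptions, not the paper's code.
import torch
import torch.nn as nn


class EdgeRoutingQNet(nn.Module):
    """Q-network mapping a local traffic state to one value per outgoing edge."""

    def __init__(self, state_dim: int, max_out_degree: int):
        super().__init__()
        self.net = nn.Sequential(
            nn.Linear(state_dim, 64), nn.ReLU(),
            nn.Linear(64, 64), nn.ReLU(),
            nn.Linear(64, max_out_degree),
        )

    def forward(self, state: torch.Tensor) -> torch.Tensor:
        return self.net(state)


def choose_next_edge(qnet: EdgeRoutingQNet,
                     state: torch.Tensor,
                     valid_mask: torch.Tensor,
                     epsilon: float = 0.1) -> int:
    """Epsilon-greedy choice among the edges that actually leave this node."""
    if torch.rand(()) < epsilon:
        valid = valid_mask.nonzero(as_tuple=False).flatten()
        return int(valid[torch.randint(len(valid), (1,))])
    with torch.no_grad():
        q = qnet(state)
    q[~valid_mask] = float("-inf")  # never route onto a nonexistent edge
    return int(q.argmax())


# Toy usage: a 10-feature state (e.g., queue lengths, signal phase, distance
# to destination) at a node where 3 of 4 possible outgoing edges exist.
qnet = EdgeRoutingQNet(state_dim=10, max_out_degree=4)
state = torch.randn(10)
valid_mask = torch.tensor([True, True, False, True])
print("next edge index:", choose_next_edge(qnet, state, valid_mask))
```

In a microsimulation setting, such an agent would be trained by rewarding reductions in realized travel time at each rerouting decision; the invalid-edge masking keeps the action space consistent across intersections with different out-degrees.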