Academic Paper

Learning a Deep Motion Planning Model for Autonomous Driving
Document Type
Conference
Source
2018 IEEE Intelligent Vehicles Symposium (IV), pp. 1137-1142, Jun. 2018
Subject
Communication, Networking and Broadcast Technologies
Computing and Processing
Robotics and Control Systems
Signal Processing and Analysis
Transportation
Feature extraction
Planning
Neural networks
Computer architecture
Roads
Autonomous vehicles
Logic gates
autonomous driving
cascaded neural network
CNN
deep motion planning
LSTM
Language
English
Abstract
To address the computational complexity and robustness issues of traditional motion planning methods for autonomous driving, this paper proposes an end-to-end motion planning model based on a deep cascaded neural network. The model directly predicts driving parameters from input image sequences. We combine two classical deep learning models, the convolutional neural network (CNN) and the long short-term memory (LSTM) network, which extract the spatial and temporal features of the input images, respectively. The proposed model can fit the nonlinear relationship between the input image sequences and the output motion parameters, enabling end-to-end planning. Experiments are conducted on data collected from a driving simulator. Experimental results show that the proposed method can efficiently learn human driving behaviors, adapt to different roads, and achieve better robustness than some existing methods.
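The abstract describes a cascaded architecture in which a CNN extracts spatial features from each frame and an LSTM aggregates them over time before a regression head outputs motion parameters. The following is a minimal sketch of that general idea in PyTorch; the layer sizes, backbone, sequence length, and the two output parameters (e.g., steering angle and speed) are illustrative assumptions, not the paper's actual configuration.

```python
# Hedged sketch of a cascaded CNN-LSTM motion planner; all hyperparameters
# below are assumptions for illustration, not the authors' reported setup.
import torch
import torch.nn as nn

class CascadedCNNLSTM(nn.Module):
    def __init__(self, feature_dim=256, hidden_dim=128, num_params=2):
        super().__init__()
        # CNN: extracts spatial features from each frame independently.
        self.cnn = nn.Sequential(
            nn.Conv2d(3, 24, kernel_size=5, stride=2), nn.ReLU(),
            nn.Conv2d(24, 48, kernel_size=5, stride=2), nn.ReLU(),
            nn.Conv2d(48, 64, kernel_size=3, stride=2), nn.ReLU(),
            nn.AdaptiveAvgPool2d(1), nn.Flatten(),
            nn.Linear(64, feature_dim), nn.ReLU(),
        )
        # LSTM: models temporal dependencies across the frame sequence.
        self.lstm = nn.LSTM(feature_dim, hidden_dim, batch_first=True)
        # Regression head: maps the final hidden state to motion parameters.
        self.head = nn.Linear(hidden_dim, num_params)

    def forward(self, frames):
        # frames: (batch, seq_len, 3, H, W)
        b, t, c, h, w = frames.shape
        feats = self.cnn(frames.view(b * t, c, h, w)).view(b, t, -1)
        out, _ = self.lstm(feats)        # (batch, seq_len, hidden_dim)
        return self.head(out[:, -1])     # predict from the last time step

# Example: a batch of 4 sequences of 8 RGB frames (resolution assumed).
model = CascadedCNNLSTM()
pred = model(torch.randn(4, 8, 3, 66, 200))  # -> shape (4, 2)
```

Applying the CNN per frame and feeding the resulting feature sequence to the LSTM is what makes the model "cascaded": spatial and temporal feature extraction are handled by separate, stacked stages, matching the division of labor described in the abstract.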