Academic Paper

Autonomous UAV with Learned Trajectory Generation and Control
Document Type
Conference
Source
2019 IEEE International Workshop on Signal Processing Systems (SiPS), pp. 115–120, Oct. 2019
Subject
Communication, Networking and Broadcast Technologies
Components, Circuits, Devices and Systems
Engineering Profession
Signal Processing and Analysis
Conferences
Signal processing
Deep reinforcement learning
continuous trajectory tracking
actor-critic algorithm
unmanned aerial vehicles
Language
English
ISSN
2374-7390
Abstract
Unmanned aerial vehicle (UAV) technology is a rapidly growing field with tremendous opportunities for research and applications. To achieve true autonomy for UAVs in the absence of remote control and external navigation aids such as global navigation satellite systems and radar, minimum-energy trajectory planning that accounts for obstacle avoidance and stability control is key. Although this can be formulated as a constrained optimization problem, the complicated non-linear relationship between UAV trajectory and thrust control makes it almost impossible to solve analytically. While deep reinforcement learning (DRL) is known for its ability to provide model-free optimization of complex systems through learning, its state space, action space, and reward function must be designed carefully. This paper presents our vision of the different layers of autonomy in a UAV system, and our effort to both generate and track the trajectory using DRL. The experimental results show that, compared to conventional approaches, the learned trajectory needs 20% less control thrust and 18% less time to reach the target. Furthermore, using the control policy learned by DRL, the UAV achieves 58.14% less position error and 21.77% less system power.
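The record does not include the authors' implementation. As a rough illustration of the actor-critic idea the abstract mentions, the sketch below trains a one-parameter linear policy to drive a toy one-dimensional tracking error to zero under a quadratic error-plus-thrust cost, loosely mirroring a minimum-energy tracking objective. All dynamics, parameter names, and hyperparameters here are illustrative assumptions, not the paper's method.

```python
import numpy as np

# Toy 1-D tracking problem (illustrative, not the paper's UAV model):
# state e is the tracking error, action a is thrust, first-order dynamics
# e' = e + a*dt. The reward penalizes both error and thrust.
rng = np.random.default_rng(0)
dt, gamma, sigma = 0.1, 0.95, 0.5        # step size, discount, exploration noise
alpha_actor, alpha_critic = 0.005, 0.05  # learning rates (assumed values)

theta = 0.0   # actor parameter: policy mean mu(e) = -theta * e
w = 0.1       # critic parameter: V(e) = -w * e**2

def value(e):
    return -w * e * e

for episode in range(2000):
    e = rng.uniform(-1.0, 1.0)                      # random initial error
    for _ in range(30):
        mu = -theta * e                             # actor mean action
        a = mu + sigma * rng.standard_normal()      # Gaussian exploration
        e_next = float(np.clip(e + a * dt, -2.0, 2.0))  # clip for safety
        r = -(e_next**2 + 0.1 * a**2)               # tracking + thrust cost
        delta = r + gamma * value(e_next) - value(e)  # TD error
        w += alpha_critic * delta * (-e * e)        # semi-gradient critic step
        # policy-gradient actor step: d log pi / d theta = ((a - mu)/sigma^2) * (-e)
        theta += alpha_actor * delta * ((a - mu) / sigma**2) * (-e)
        e = e_next

# A positive learned gain theta with theta*dt < 2 makes the closed loop
# e' = (1 - theta*dt) * e stable, so the tracking error decays toward zero.
print("learned gain theta =", theta)
```

The quadratic action penalty is what makes this a (toy) minimum-energy formulation: the actor trades off how quickly it cancels the error against how much thrust it spends doing so.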