Academic Journal

Convergence and Sample Complexity of Gradient Methods for the Model-Free Linear–Quadratic Regulator Problem
Document Type
Periodical
Source
IEEE Transactions on Automatic Control, 67(5):2435-2450, May 2022
Subject
Signal Processing and Analysis
Convergence
Complexity theory
Optimization
Mathematical model
Heuristic algorithms
Control theory
Regulators
Data-driven control
gradient descent
gradient-flow dynamics
linear–quadratic regulator (LQR)
model-free control
nonconvex optimization
Polyak–Łojasiewicz inequality
random search method
reinforcement learning (RL)
sample complexity
Language
English
ISSN
0018-9286
1558-2523
2334-3303
Abstract
Model-free reinforcement learning attempts to find an optimal control action for an unknown dynamical system by directly searching over the parameter space of controllers. The convergence behavior and statistical properties of these approaches are often poorly understood because of the nonconvex nature of the underlying optimization problems and the lack of exact gradient computation. In this article, we take a step toward demystifying the performance and efficiency of such methods by focusing on the standard infinite-horizon linear–quadratic regulator problem for continuous-time systems with unknown state-space parameters. We establish exponential stability for the ordinary differential equation (ODE) that governs the gradient-flow dynamics over the set of stabilizing feedback gains and show that a similar result holds for the gradient descent method that arises from the forward Euler discretization of the corresponding ODE. We also provide theoretical bounds on the convergence rate and sample complexity of the random search method with two-point gradient estimates. We prove that the required simulation time for achieving $\epsilon$-accuracy in the model-free setup and the total number of function evaluations both scale as $\log \, (1/\epsilon)$.
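The abstract refers to a random search method with two-point gradient estimates for the continuous-time LQR problem. The Python sketch below is only a rough illustration of that idea, not the paper's exact algorithm or step-size rules: it estimates the gradient of the LQR cost from two antithetic function evaluations and takes gradient-descent-style steps. The system matrices, weights, smoothing radius r, and step size alpha are hypothetical placeholders, and the closed-form cost via a Lyapunov equation stands in for the simulated rollouts a truly model-free implementation would use.

```python
import numpy as np
from scipy.linalg import solve_continuous_lyapunov

def lqr_cost(K, A, B, Q, R, Omega):
    """LQR cost J(K) = trace(P(K) Omega) for u = -K x; np.inf if A - B K is not Hurwitz."""
    Acl = A - B @ K
    if np.max(np.linalg.eigvals(Acl).real) >= 0:
        return np.inf
    # Solve (A - B K)^T P + P (A - B K) = -(Q + K^T R K)
    P = solve_continuous_lyapunov(Acl.T, -(Q + K.T @ R @ K))
    return np.trace(P @ Omega)

def two_point_gradient_estimate(K, r, cost, rng):
    """Two-point (antithetic) gradient estimate of cost at K with smoothing radius r."""
    U = rng.standard_normal(K.shape)
    U /= np.linalg.norm(U)                      # direction drawn from the unit sphere
    d = K.size
    delta = cost(K + r * U) - cost(K - r * U)
    return (d / (2.0 * r)) * delta * U

def random_search(K0, cost, alpha=1e-3, r=1e-2, iters=2000, seed=0):
    """Gradient-descent-style random search driven by two-point estimates."""
    rng = np.random.default_rng(seed)
    K = K0.copy()
    for _ in range(iters):
        G = two_point_gradient_estimate(K, r, cost, rng)
        if not np.all(np.isfinite(G)):
            continue                            # perturbation left the stabilizing set; resample
        K = K - alpha * G
    return K

# Example on a hypothetical 2-state, 1-input system (all numbers illustrative)
A = np.array([[0.0, 1.0], [1.0, 0.0]])
B = np.array([[0.0], [1.0]])
Q, R, Omega = np.eye(2), np.eye(1), np.eye(2)
K0 = np.array([[3.0, 3.0]])                     # chosen so that A - B K0 is Hurwitz
K_hat = random_search(K0, lambda K: lqr_cost(K, A, B, Q, R, Omega))
```

In this sketch the iterate stays in the set of stabilizing gains only because steps that produce an infinite (destabilizing) cost are skipped; the paper's analysis instead bounds the step size and smoothing radius so that the iterates provably remain stabilizing.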