Academic paper
Dynamic programming for semi-Markov modulated SDEs.
Document Type
Journal
Author
Azevedo, N. (P-MINH-PLE) AMS Author Profile; Pinheiro, D. (1-CUNYB) AMS Author Profile; Pinheiro, S. (1-CUNYQB-MCS) AMS Author Profile
Source
Subject
49 Calculus of variations and optimal control; optimization -- 49L Hamilton-Jacobi theories, including dynamic programming
49L25 Viscosity solutions
60 Probability theory and stochastic processes -- 60K Special processes
60K15 Markov renewal processes, semi-Markov processes
91 Game theory, economics, social and behavioral sciences -- 91G Mathematical finance
91G80 Financial applications of other theories
93 Systems theory; control -- 93E Stochastic systems and control
93E20 Optimal stochastic control
Language
English
Abstract
This paper contributes to the vast literature on stochastic optimal control by extending the dynamic programming principle to the case where the state variable dynamics are given by a diffusive stochastic differential equation whose coefficients depend on a semi-Markov process with a finite state space. This principle is then used to derive the corresponding Hamilton-Jacobi-Bellman equation and to characterize the value function of the stochastic optimal control problem under consideration as a viscosity solution of that equation. A verification theorem is also provided. The results are illustrated with a consumption-investment problem in which the asset prices evolve according to a semi-Markov modulated SDE.

This formalism is well suited to many applications in which both the state variable dynamics and the objective functional depend on a set of known 'unknowns' occurring at random instants of time, encapsulated here by the components of the semi-Markov process. In finance, in particular, semi-Markov processes allow the asset price dynamics to switch between different states of the financial market, among other modeling flexibilities.
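To make the setting concrete, the following is a minimal simulation sketch of a semi-Markov modulated SDE of the kind described in the abstract: a two-regime geometric Brownian motion whose drift and volatility switch at the jump times of a finite-state semi-Markov process. All regime labels, parameter values, and the gamma sojourn-time distributions are illustrative assumptions, not taken from the paper; the discretization is a plain Euler-Maruyama scheme.

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical two-regime market: regime 0 = "bull", regime 1 = "bear".
MU = np.array([0.08, -0.02])      # drift per regime (assumed values)
SIGMA = np.array([0.15, 0.35])    # volatility per regime (assumed values)
P = np.array([[0.0, 1.0],         # embedded jump-chain transition matrix
              [1.0, 0.0]])

def sojourn(state):
    # Gamma-distributed holding times make the process semi-Markov:
    # a continuous-time Markov chain would require exponential sojourns.
    shape, scale = (2.0, 0.5) if state == 0 else (1.5, 0.4)
    return rng.gamma(shape, scale)

def simulate(s0=1.0, T=5.0, dt=1e-3):
    """Euler-Maruyama for dS = mu(Y) S dt + sigma(Y) S dW,
    where Y is a finite-state semi-Markov process."""
    n = int(T / dt)
    s = np.empty(n + 1)
    s[0] = s0
    y = 0                          # initial regime
    next_jump = sojourn(y)
    t = 0.0
    for k in range(n):
        if t >= next_jump:         # regime switch at a random instant
            y = rng.choice(len(MU), p=P[y])
            next_jump = t + sojourn(y)
        dw = rng.normal(0.0, np.sqrt(dt))
        s[k + 1] = s[k] + MU[y] * s[k] * dt + SIGMA[y] * s[k] * dw
        t += dt
    return s

path = simulate()
print(len(path), path[0])
```

Because the sojourn times are not exponential, the pair (regime, time since last jump) — not the regime alone — is Markov, which is precisely why the paper needs an extended dynamic programming principle rather than the classical Markov-modulated machinery.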