Journal Article

Learning in Mean-Field Games
Document Type
Periodical
Source
IEEE Transactions on Automatic Control, 59(3):629-644, March 2014
Subject
Signal Processing and Analysis
Oscillators
Games
Approximation methods
Mathematical model
Sociology
Statistics
Equations
Mean-field game
nonlinear systems
phase transition
stochastic learning
synchronization
Language
English
ISSN
0018-9286 (print)
1558-2523 (electronic)
2334-3303 (CD)
Abstract
The purpose of this paper is to show how insight obtained from a mean-field model can be used to create an architecture for approximate dynamic programming (ADP) for a certain class of games consisting of a large number of agents. The general technique is illustrated with the aid of a mean-field oscillator game model introduced in our prior work. The states of the model are interpreted as the phase angles of a collection of nonhomogeneous oscillators, so the model may be regarded as an extension of the classical coupled-oscillator model of Kuramoto. The paper introduces ADP techniques for the design and adaptation (learning) of approximately optimal control laws for this model. For this purpose, a parameterization is proposed, based on an analysis of the mean-field PDE model for the game. In an offline setting, a Galerkin procedure is introduced to choose the optimal parameters; in an online setting, a steepest descent algorithm is proposed. The paper provides a detailed analysis of the optimal parameter values as well as the Bellman error, both for the Galerkin approximation and for the online algorithm. Finally, a phase transition result is described for the large-population limit when each oscillator uses the approximately optimal control law. A critical value of the control penalty parameter is identified: above this value the oscillators are incoherent, and below this value (when control is sufficiently cheap) the oscillators synchronize. These conclusions are illustrated with results from numerical experiments.
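
For readers unfamiliar with the Kuramoto model referenced in the abstract, the following minimal Python/NumPy sketch simulates the classical (uncontrolled) coupled-oscillator dynamics and sweeps the coupling strength to show the incoherence-to-synchrony transition via the order parameter r. It illustrates only the baseline model that the paper extends, not the paper's mean-field game or ADP control law; in the paper the control penalty parameter plays an analogous (inverse) role to the coupling strength here. The function name simulate_kuramoto and all parameter values are illustrative assumptions.

import numpy as np

def simulate_kuramoto(n=200, coupling=2.0, dt=0.01, steps=5000, seed=0):
    """Euler simulation of the classical Kuramoto model:
        dtheta_i/dt = omega_i + (K/n) * sum_j sin(theta_j - theta_i).
    Returns the final order parameter r = |mean(exp(i*theta))|,
    near 0 for incoherence and near 1 for synchrony."""
    rng = np.random.default_rng(seed)
    omega = rng.normal(0.0, 1.0, n)          # heterogeneous natural frequencies
    theta = rng.uniform(0.0, 2 * np.pi, n)   # random initial phases
    for _ in range(steps):
        z = np.exp(1j * theta).mean()        # complex order parameter
        r, psi = np.abs(z), np.angle(z)
        # mean-field form of the coupling: K * r * sin(psi - theta_i)
        theta += dt * (omega + coupling * r * np.sin(psi - theta))
    return np.abs(np.exp(1j * theta).mean())

# Sweep the coupling strength: below a critical value the oscillators
# remain incoherent (r near 0); above it they synchronize (r toward 1).
for K in (0.5, 1.0, 2.0, 4.0):
    print(f"K = {K:.1f}  ->  r = {simulate_kuramoto(coupling=K):.2f}")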