Academic Article

Bandit learning in concave $N$-person games
Document Type
Working Paper
Source
Subject
Computer Science - Computer Science and Game Theory
Computer Science - Machine Learning
Mathematics - Optimization and Control
Primary 91A10, 91A26; secondary 68Q32, 68T02
Language
English
Abstract
This paper examines the long-run behavior of learning with bandit feedback in non-cooperative concave games. The bandit framework accounts for extremely low-information environments where the agents may not even know they are playing a game; as such, the agents' most sensible choice in this setting would be to employ a no-regret learning algorithm. In general, this does not mean that the players' behavior stabilizes in the long run: no-regret learning may lead to cycles, even with perfect gradient information. However, if a standard monotonicity condition is satisfied, our analysis shows that no-regret learning based on mirror descent with bandit feedback converges to Nash equilibrium with probability $1$. We also derive an upper bound for the convergence rate of the process that nearly matches the best attainable rate for single-agent bandit stochastic optimization.
Comment: 24 pages, 1 figure
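As an illustration of the kind of scheme the abstract describes, here is a minimal Python sketch; it is not the paper's algorithm or notation. Two players with one-dimensional actions run mirror descent with a Euclidean regularizer (i.e., projected gradient ascent), and each player estimates its payoff gradient from the single realized payoff it observes each round (one-point bandit feedback). The payoff functions and the parameters b, g, eta, and delta are invented for the example; the resulting game is strongly monotone, so the iterates should approach its unique Nash equilibrium.

import numpy as np

rng = np.random.default_rng(0)
b = np.array([1.0, 1.0])  # illustrative payoff parameters, not from the paper
g = 0.5                   # cross-player coupling; |g| < 2 keeps the game strongly monotone

def payoff(i, x):
    """Player i's concave payoff: u_i(x) = x_i * (b_i - x_i - g * x_j)."""
    j = 1 - i
    return x[i] * (b[i] - x[i] - g * x[j])

x = np.array([0.9, 0.1])  # arbitrary initial actions in [0, 1]
for t in range(1, 100_001):
    eta = 0.5 * t ** -0.75    # decreasing step size (illustrative schedule)
    delta = 0.5 * t ** -0.25  # query radius delta_t -> 0
    x = np.clip(x, delta, 1.0 - delta)   # shrink so the perturbed actions stay feasible
    z = rng.choice([-1.0, 1.0], size=2)  # independent unit perturbation directions (d_i = 1)
    x_hat = x + delta * z                # the actions actually played this round
    # one-point estimator: (d_i / delta) * u_i(x_hat) * z_i approximates grad_i u_i(x)
    v_hat = np.array([payoff(i, x_hat) * z[i] / delta for i in range(2)])
    x = np.clip(x + eta * v_hat, 0.0, 1.0)  # gradient ascent step (Euclidean mirror map)

# unique Nash equilibrium of this quadratic game, from 2*x_i + g*x_j = b_i:
x_star = np.linalg.solve(np.array([[2.0, g], [g, 2.0]]), b)
print("learned:", np.round(x, 3), "Nash:", np.round(x_star, 3))

The bandit ingredient is that v_hat is built only from the realized payoff at the played point x_hat: shrinking the query radius delta drives the estimator's bias to zero at the cost of higher variance, and this bias-variance trade-off is what separates the achievable convergence rate from the rate under perfect gradient information.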