Journal Article

A Structured Multiarmed Bandit Problem and the Greedy Policy
Document Type
Periodical
Source
IEEE Transactions on Automatic Control, 54(12):2787-2802, Dec. 2009
Subject
Signal Processing and Analysis
Keywords
Arm
Infinite horizon
Prototypes
Operations research
Laboratories
Costs
Convergence
Markov decision process (MDP)
Language
English
ISSN
0018-9286 (print)
1558-2523 (electronic)
2334-3303 (CD-ROM)
Abstract
We consider a multiarmed bandit problem where the expected reward of each arm is a linear function of an unknown scalar with a prior distribution. The objective is to choose a sequence of arms that maximizes the expected total (or discounted total) reward. We demonstrate the effectiveness of a greedy policy that takes advantage of the known statistical correlation structure among the arms. In the infinite horizon discounted reward setting, we show that the greedy and optimal policies eventually coincide, and both settle on the best arm. This is in contrast with the Incomplete Learning Theorem for the case of independent arms. In the total reward setting, we show that the cumulative Bayes risk after $T$ periods under the greedy policy is at most $O(\log T)$, which is smaller than the lower bound of $\Omega(\log^{2}T)$ established by Lai for a general, but different, class of bandit problems. We also establish the tightness of our bounds. Theoretical and numerical results show that the performance of our policy scales independently of the number of arms.
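As an illustration of the policy described in the abstract, the following Python sketch implements a greedy rule for the linear-in-a-scalar model under assumptions not stated above: a Gaussian prior on the unknown scalar Z and Gaussian reward noise, which give a closed-form posterior update. The function name greedy_linear_bandit and all parameter names are hypothetical, not from the paper. Arm i has a known coefficient u_i and expected reward u_i * Z, and the policy always pulls the arm maximizing u_i times the current posterior mean of Z.

import numpy as np


def greedy_linear_bandit(u, z_true, T, prior_mean=0.0, prior_var=1.0,
                         noise_var=1.0, seed=None):
    """Greedy policy: pull the arm with the highest estimated expected reward."""
    rng = np.random.default_rng(seed)
    u = np.asarray(u, dtype=float)
    post_mean = prior_mean          # posterior mean of the unknown scalar Z
    post_prec = 1.0 / prior_var     # posterior precision of Z
    arms, rewards = [], []
    for _ in range(T):
        # Greedy choice: maximize the estimated expected reward u_i * E[Z | history].
        i = int(np.argmax(u * post_mean))
        # Observe a noisy reward r = u_i * Z + noise from the chosen arm.
        r = u[i] * z_true + rng.normal(0.0, np.sqrt(noise_var))
        # Conjugate Gaussian update of the belief about Z given the new observation.
        new_prec = post_prec + u[i] ** 2 / noise_var
        post_mean = (post_prec * post_mean + u[i] * r / noise_var) / new_prec
        post_prec = new_prec
        arms.append(i)
        rewards.append(r)
    return np.array(arms), np.array(rewards)


# Example: four arms sharing the single unknown scalar Z; as the posterior on Z
# sharpens, the greedy choice settles on one arm.
arms, rewards = greedy_linear_bandit(u=[0.2, -0.5, 1.0, 0.7], z_true=0.8, T=50, seed=0)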