Journal Article

Distributed Online Linear Regressions
Document Type
Periodical
Source
IEEE Transactions on Information Theory, 67(1):616-639, Jan. 2021
Subject
Communication, Networking and Broadcast Technologies
Signal Processing and Analysis
Linear regression
Upper bound
Prediction algorithms
Convex functions
Distributed algorithms
Distributed databases
online linear regression
long-term constraints
regret
Language
English
ISSN
0018-9448 (print)
1557-9654 (electronic)
Abstract
We study online linear regression problems in a distributed setting, where the data is spread over a network. In each round, each network node proposes a linear predictor, with the objective of fitting the network-wide data. It then updates its predictor for the next round according to the local feedback and the information received from neighboring nodes. The predictions made at a given node are assessed through the notion of regret, defined as the difference between their cumulative network-wide squared errors and those of the best offline network-wide linear predictor. Various scenarios are investigated, depending on the nature of the local feedback (full information or bandit feedback), on the set of available predictors (the decision set), and on the way data is generated (by an oblivious or adaptive adversary). We propose simple and natural distributed regression algorithms involving, at each node and in each round, a local gradient descent step and a communication-and-averaging step in which nodes align their predictors with those of their neighbors. We establish regret upper bounds typically in $\mathcal{O}(T^{3/4})$ when the decision set is unbounded and in $\mathcal{O}(\sqrt{T})$ when the decision set is bounded.
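The two-step scheme described in the abstract (a local gradient step on the squared loss followed by consensus averaging with neighbors) can be sketched as follows. This is a minimal illustration, not the authors' algorithm: the ring network, the mixing weights, and the step-size schedule `eta / sqrt(t + 1)` are all assumptions made for the example, and the data here is noiseless rather than adversarial.

```python
import numpy as np

def run(T=300, n=4, d=3, eta=0.1, seed=0):
    """Sketch of distributed online linear regression on a ring of n nodes.

    Each round, every node predicts with its local linear model, observes
    the squared error on its own sample, takes a gradient step, and then
    averages its parameters with its two ring neighbors.
    """
    rng = np.random.default_rng(seed)
    theta_star = rng.normal(size=d)        # ground-truth predictor (for the demo)

    # Doubly stochastic mixing matrix for a ring: half self-weight,
    # a quarter to each of the two neighbors.
    W = np.zeros((n, n))
    for i in range(n):
        W[i, i] = 0.5
        W[i, (i - 1) % n] = 0.25
        W[i, (i + 1) % n] = 0.25

    theta = np.zeros((n, d))               # one linear predictor per node
    total_loss = 0.0
    for t in range(T):
        x = rng.normal(size=(n, d))        # one feature vector per node
        y = x @ theta_star                 # noiseless targets
        pred = np.einsum('ij,ij->i', theta, x)
        err = pred - y
        total_loss += np.sum(err ** 2)

        grad = 2.0 * err[:, None] * x      # gradient of the squared loss
        step = eta / np.sqrt(t + 1)        # decaying step size (assumed schedule)
        # Local gradient step, then consensus averaging with neighbors.
        theta = W @ (theta - step * grad)
    return theta, theta_star, total_loss
```

Because the mixing matrix is doubly stochastic, the averaging step pulls all nodes toward a common predictor while preserving the network-wide average, which is the standard mechanism behind distributed (sub)gradient methods of this kind.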