Academic Article

Work-Efficient Parallel Derandomization I: Chernoff-like Concentrations via Pairwise Independence
Document Type
Working Paper
Subject
Computer Science - Data Structures and Algorithms
Language
English
Abstract
We present a novel technique for work-efficient parallel derandomization of algorithms that rely on concentration-of-measure bounds such as the Chernoff, Hoeffding, and Bernstein inequalities. Our method increases the algorithm's computational work and depth by only polylogarithmic factors. Before our work, the only known method for obtaining parallel derandomization with such strong concentrations was via the results of [Motwani, Naor, and Naor FOCS'89; Berger and Rompel FOCS'89], which perform a binary search in a $k$-wise independent space for $k=poly(\log n)$. However, that method blows up the computational work by a large $poly(n)$ factor and does not yield work-efficient parallel algorithms. Their method was an extension of the approach of [Luby FOCS'88], which gave a work-efficient derandomization but was limited to algorithms analyzed with only pairwise independence. Pushing the method from pairwise to higher $k$-wise analysis is what caused the $poly(n)$-factor blow-up in computational work. Our work can be viewed as an alternative extension from the pairwise case, one that yields the desired strong concentrations while retaining work efficiency up to logarithmic factors. Our approach casts the problem of determining the random variables as an iterative process with $poly(\log n)$ iterations, where different iterations use independent randomness. This is done in such a way that, to obtain the desired concentrations, we need only pairwise independence within each iteration. In particular, we model each binary random variable as the endpoint of a gradual random walk, and our method shows that the desired Chernoff-like concentrations on the endpoints of these walks can be reduced to a pairwise analysis of the steps of these random walks within each iteration (while maintaining independence across iterations).
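To make the iterative structure concrete, the following Python sketch illustrates the two ingredients the abstract describes: pairwise-independent bits generated from a small seed within each iteration, and fresh, independent seeds across the iterations, so that each binary variable is the endpoint of a gradual random walk. This is only an illustration under our own assumptions, not the paper's actual algorithm; the function names, the seed construction, and the iteration count are hypothetical choices made here for clarity.

```python
import random

def pairwise_independent_bits(n, seed_bits, rng):
    # Standard GF(2) construction: b_i = <r, i> XOR c for a random bit
    # vector r and a random bit c. For distinct indices i, j in 1..n the
    # pair (b_i, b_j) is uniform on {0,1}^2, i.e. pairwise independent,
    # yet the whole batch is generated from an O(log n)-bit seed.
    r = rng.getrandbits(seed_bits)
    c = rng.getrandbits(1)
    return [(bin(r & i).count("1") % 2) ^ c for i in range(1, n + 1)]

def random_walk_endpoints(n, iterations, rng):
    # Each variable is the endpoint of a walk whose +/-1 steps are
    # pairwise independent within an iteration (one shared small seed)
    # and fully independent across iterations (a fresh seed each time).
    position = [0] * n
    for _ in range(iterations):
        bits = pairwise_independent_bits(n, n.bit_length(), rng)
        for i in range(n):
            position[i] += 1 if bits[i] else -1
    return position

if __name__ == "__main__":
    rng = random.Random(0)
    n, T = 1024, 64  # T stands in for the paper's poly(log n) iterations
    endpoints = random_walk_endpoints(n, T, rng)
    # sum(endpoints) has mean 0 and variance n*T: pairwise independence
    # already bounds each iteration's variance, and the paper's point is
    # that this per-iteration pairwise analysis suffices for Chernoff-like
    # concentration on the endpoints. The derandomization then fixes each
    # iteration's small seed deterministically, one iteration at a time.
    print("sum of endpoints:", sum(endpoints))
```

Because each iteration's randomness is just an $O(\log n)$-bit seed, a deterministic algorithm can search over that seed space iteration by iteration, which is what keeps the derandomization work-efficient in contrast to searching a single $k$-wise independent space for $k = poly(\log n)$.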