Academic Paper

Binary Linear Codes With Optimal Scaling: Polar Codes With Large Kernels
Document Type
Periodical
Source
IEEE Transactions on Information Theory, 67(9):5693-5710, Sep. 2021
Subject
Communication, Networking and Broadcast Technologies
Signal Processing and Analysis
Keywords
Kernel
Complexity theory
Polar codes
Maximum likelihood decoding
Parity check codes
Channel coding
Capacity planning
Error-correcting codes
polar codes
polarization kernels
quasi-linear complexity
scaling exponent
Language
English
ISSN
0018-9448 (print)
1557-9654 (electronic)
Abstract
We prove that, for the binary erasure channel (BEC), the polar-coding paradigm gives rise to codes that not only approach the Shannon limit but do so under the best possible scaling of their block length as a function of the gap to capacity. This result exhibits the first known family of binary codes that attain both optimal scaling and quasi-linear complexity of encoding and decoding. Our proof is based on the construction and analysis of binary polar codes with large kernels. When communicating reliably at rates within $\varepsilon > 0$ of capacity, the code length $n$ often scales as $O(1/\varepsilon^{\mu})$, where the constant $\mu$ is called the scaling exponent. It is known that the optimal scaling exponent is $\mu = 2$, and it is achieved by random linear codes. The scaling exponent of conventional polar codes (based on the $2\times 2$ kernel) on the BEC is $\mu = 3.63$. This falls far short of the optimal scaling guaranteed by random codes. Our main contribution is a rigorous proof of the following result: for the BEC, there exist $\ell \times \ell$ binary kernels such that polar codes constructed from these kernels achieve a scaling exponent $\mu(\ell)$ that tends to the optimal value of 2 as $\ell$ grows. We furthermore characterize precisely how large $\ell$ needs to be as a function of the gap between $\mu(\ell)$ and 2. The resulting binary codes maintain the recursive structure of conventional polar codes, and thereby achieve construction complexity $O(n)$ and encoding/decoding complexity $O(n\log n)$.
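As a rough illustration of what the scaling exponent means in practice, the sketch below (not from the paper; constants in the $O(\cdot)$ are ignored, and the function name is our own) compares the order-of-magnitude block length $n \sim (1/\varepsilon)^{\mu}$ required at gap $\varepsilon$ for conventional $2\times 2$-kernel polar codes on the BEC ($\mu = 3.63$) against the optimal exponent ($\mu = 2$) achieved by random linear codes and, asymptotically in $\ell$, by the large-kernel polar codes of this paper.

```python
def block_length(eps: float, mu: float) -> float:
    """Order-of-magnitude block length n = (1/eps)**mu; all constants ignored."""
    return (1.0 / eps) ** mu

# Compare the two exponents as the gap to capacity shrinks.
for eps in (0.1, 0.01, 0.001):
    n_polar = block_length(eps, 3.63)  # conventional 2x2-kernel polar codes (BEC)
    n_opt = block_length(eps, 2.0)     # optimal scaling (random linear codes)
    print(f"eps={eps}: polar ~ {n_polar:.3e}, optimal ~ {n_opt:.3e}, "
          f"ratio ~ {n_polar / n_opt:.1f}")
```

The ratio grows like $(1/\varepsilon)^{1.63}$, which is why closing the gap between $\mu = 3.63$ and $\mu = 2$ matters as $\varepsilon \to 0$.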