Academic Paper

Interpreting Deep-Learned Error-Correcting Codes
Document Type
Conference
Source
2022 IEEE International Symposium on Information Theory (ISIT), pp. 2457-2462, Jun. 2022
Subject
Communication, Networking and Broadcast Technologies
Training
Deep learning
Neural networks
Turbo codes
Robustness
Decoding
Mixed integer linear programming
Language
English
ISSN
2157-8117
Abstract
Deep learning has recently been used to learn error-correcting encoders and decoders that may improve upon previously known codes in certain regimes. The encoders and decoders are learned "black boxes", and interpreting their behavior is of interest both for further applications and for incorporating this work into coding theory. Understanding these codes provides a compelling case study for Explainable Artificial Intelligence (XAI): since coding theory is a well-developed and quantitative field, the interpretability problems that arise differ from those traditionally considered. We develop post-hoc interpretability techniques to analyze the deep-learned, autoencoder-based encoders of TurboAE-binary codes, using influence heatmaps, mixed integer linear programming (MILP), Fourier analysis, and property testing. We compare the learned, interpretable encoders, combined with BCJR decoders, to the original black-box code.
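The abstract names several post-hoc techniques without detailing them. As an illustration only, below is a minimal sketch of the first one, an influence heatmap, for a generic black-box binary encoder: flip each input bit over many random inputs and record how often each output bit changes. The toy_encoder stand-in, the rate, and the sample sizes are assumptions made for this example and do not come from the paper; the actual analysis would instead query the trained TurboAE-binary encoder.

import numpy as np

def toy_encoder(bits):
    # Hypothetical stand-in for the learned encoder: a fixed random linear
    # code over GF(2), used only so the sketch is runnable end to end.
    rng = np.random.default_rng(0)
    G = rng.integers(0, 2, size=(bits.shape[-1], 3 * bits.shape[-1]))
    return bits @ G % 2

def influence_heatmap(encoder, K, num_samples=2000, seed=1):
    # Estimate, for each input bit i and output bit j, the probability
    # (over uniformly random inputs) that flipping bit i changes bit j.
    rng = np.random.default_rng(seed)
    x = rng.integers(0, 2, size=(num_samples, K))
    y = encoder(x)
    heat = np.zeros((K, y.shape[1]))
    for i in range(K):
        x_flip = x.copy()
        x_flip[:, i] ^= 1  # flip input bit i in every sample
        heat[i] = (encoder(x_flip) != y).mean(axis=0)
    return heat

print(influence_heatmap(toy_encoder, K=8).round(2))

For the linear toy encoder the estimated influences are exactly 0 or 1; for a learned nonlinear encoder the heatmap takes intermediate values and visualizes which message positions drive each codeword position.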