Journal Article

Resource-Efficient Deep Neural Networks for Automotive Radar Interference Mitigation
Document Type
Periodical
Source
IEEE Journal of Selected Topics in Signal Processing, vol. 15, no. 4, pp. 927-940, Jun. 2021
Subject
Signal Processing and Analysis
Interference
Radar
Quantization (signal)
Sensors
Radio frequency
Radar signal processing
Training
Automotive radar
binarized convolutional neural networks
discrete weight distributions
interference mitigation
quantization aware training
resource-efficiency
straight-through estimator
uncertainty maps
Language
English
ISSN
1932-4553 (Print)
1941-0484 (Electronic)
Abstract
Radar sensors are crucial for environment perception in driver assistance systems as well as autonomous vehicles. With a rising number of radar sensors and the so-far unregulated automotive radar frequency band, mutual interference is inevitable and must be dealt with. Algorithms and models operating on radar data are required to run the early processing steps on specialized radar sensor hardware. This specialized hardware typically has strict resource constraints, i.e., low memory capacity and low computational power. Convolutional Neural Network (CNN)-based approaches for denoising and interference mitigation yield promising results for radar processing in terms of performance. Regarding resource constraints, however, CNNs typically exceed the hardware's capacities by far. In this paper, we investigate quantization techniques for CNN-based denoising and interference mitigation of radar signals. We analyze the quantization of (i) weights and (ii) activations of different CNN-based model architectures. This quantization reduces the memory required for model storage and during inference. We compare models with fixed and learned bit-widths and contrast two methodologies for training quantized CNNs, i.e., the straight-through gradient estimator and training distributions over discrete weights. We illustrate the importance of structurally small real-valued base models for quantization and show that learned bit-widths yield the smallest models. We achieve a memory reduction of around 80% compared to the real-valued baseline. For practical reasons, however, we recommend the use of 8 bits for weights and activations, which results in models that require only 0.2 megabytes of memory.
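
For readers unfamiliar with quantization-aware training via the straight-through estimator mentioned in the abstract, the following is a minimal Python sketch, assuming PyTorch. The names RoundSTE, ste_quantize, and QuantConv2d, as well as the uniform symmetric 8-bit scaling scheme, are illustrative assumptions and are not taken from the paper; the authors' actual architectures and quantization methods are described in the full text.

import torch
import torch.nn as nn
import torch.nn.functional as F


class RoundSTE(torch.autograd.Function):
    # Forward: round to the nearest integer.
    # Backward: pass the incoming gradient through unchanged (straight-through).
    @staticmethod
    def forward(ctx, x):
        return torch.round(x)

    @staticmethod
    def backward(ctx, grad_output):
        return grad_output


def ste_quantize(x, n_bits=8):
    # Uniform symmetric quantization of a tensor to n_bits; gradients flow
    # back to the real-valued tensor through the STE. (Illustrative scheme.)
    q_max = 2 ** (n_bits - 1) - 1                       # e.g. 127 for 8 bits
    scale = x.detach().abs().max().clamp(min=1e-8) / q_max
    q = RoundSTE.apply(x / scale).clamp(-q_max, q_max)
    return q * scale


class QuantConv2d(nn.Conv2d):
    # Convolution whose weights are quantized on the fly during training;
    # the optimizer still updates the underlying real-valued weights.
    def __init__(self, *args, n_bits=8, **kwargs):
        super().__init__(*args, **kwargs)
        self.n_bits = n_bits

    def forward(self, x):
        w_q = ste_quantize(self.weight, self.n_bits)
        return F.conv2d(x, w_q, self.bias, self.stride,
                        self.padding, self.dilation, self.groups)


if __name__ == "__main__":
    layer = QuantConv2d(1, 8, kernel_size=3, padding=1, n_bits=8)
    out = layer(torch.randn(1, 1, 64, 64))
    out.sum().backward()   # gradients reach the real-valued weights
    print(out.shape, layer.weight.grad is not None)

As a rough back-of-the-envelope check on the abstract's numbers: storing weights and activations at 8 bits instead of 32-bit floating point cuts the corresponding memory by about a factor of four, which is in the same ballpark as the roughly 80% reduction the paper reports for learned bit-widths.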