Journal Article

Trouble-Shooting at GAN Point: Improving Functional Safety in Deep Learning Accelerators
Document Type
Periodical
Source
IEEE Transactions on Computers, 72(8):2194-2208, Aug. 2023
Subject
Computing and Processing
Circuit faults
Deep learning
Safety
Neural networks
Mission critical systems
Generative adversarial networks
Classification algorithms
Functional safety
neural network accelerator
generative adversarial networks
memory faults
stuck-at faults
Language
English
ISSN
0018-9340
1557-9956
2326-3814
Abstract
The proliferation of Deep Neural Networks (DNNs) in real-time, mission-critical applications has promoted the implementation of custom-built DNN inference accelerators. These accelerators require a considerable amount of on-chip memory to store millions of trained DNN parameters for executing inference at the edge. Drastic technology scaling in recent years has made these memory circuits highly vulnerable to faults arising from aging, latent defects, single-event upsets, and similar mechanisms. Such faults are highly detrimental to the classification accuracy of the DNN accelerator, leading to critical Functional Safety (FuSa) violations. This can result in catastrophic consequences when the accelerator is used in mission-critical applications. In order to detect such violations in mission mode, we propose to generate a set of functional test patterns by leveraging the concept of Generative Adversarial Networks (GANs); these patterns are independent of the DNN model and the accelerator characteristics. Our experimental results demonstrate that the generated test patterns improve FuSa violation detection coverage by up to 130.28% compared to existing techniques. To the best of our knowledge, this is the first work that generates GAN-based test patterns in order to perform FuSa violation detection in mission-critical DNN accelerators.
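To illustrate the general idea described in the abstract, the following is a minimal sketch of how GAN-generated test patterns could be used for mission-mode FuSa checking. It assumes PyTorch, a pretrained GAN generator (`generator`), and the deployed DNN model (`dnn`); these names, the latent dimension, and the top-1 mismatch criterion are illustrative assumptions, not the paper's actual implementation.

```python
# Hypothetical sketch: GAN-generated functional test patterns for FuSa checking.
# Assumes a pretrained generator `generator` and a deployed DNN `dnn`;
# none of these names or parameters are taken from the paper itself.
import torch

@torch.no_grad()
def build_test_set(generator, latent_dim=100, n_patterns=256):
    """Sample latent vectors and let the generator synthesize test patterns."""
    z = torch.randn(n_patterns, latent_dim)
    return generator(z)  # e.g., a batch of image-like test inputs

@torch.no_grad()
def record_golden_responses(dnn, patterns):
    """Record fault-free (golden) top-1 predictions once, before deployment."""
    return dnn(patterns).argmax(dim=1)

@torch.no_grad()
def check_fusa(dnn_under_test, patterns, golden):
    """In mission mode, rerun the patterns on the (possibly faulty) accelerator
    and flag any deviation from the golden responses as a FuSa violation."""
    current = dnn_under_test(patterns).argmax(dim=1)
    mismatches = (current != golden).sum().item()
    return mismatches == 0, mismatches
```

In this sketch, the test patterns and golden responses are computed offline, and only the lightweight mismatch check runs in mission mode; a faulty accelerator (e.g., due to stuck-at faults in parameter memory) would alter some predictions and thereby expose the violation.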