Academic Journal Article

Distribution Aware Testing Framework for Deep Neural Networks
Document Type
Periodical
Source
IEEE Access, vol. 11, pp. 119481-119505, 2023
Subject
Aerospace
Bioengineering
Communication, Networking and Broadcast Technologies
Components, Circuits, Devices and Systems
Computing and Processing
Engineered Materials, Dielectrics and Plasmas
Engineering Profession
Fields, Waves and Electromagnetics
General Topics for Engineers
Geoscience
Nuclear Engineering
Photonics and Electrooptics
Power, Energy and Industry Applications
Robotics and Control Systems
Signal Processing and Analysis
Transportation
Testing
Data models
Artificial neural networks
Training
Predictive models
Uncertainty
Training data
Deep learning
Data distribution
deep learning testing
explainability
test selection and prioritization
uncertainty
Language
English
ISSN
2169-3536
Abstract
The increasing use of deep learning (DL) in safety-critical applications highlights the critical need for systematic and effective testing to ensure system reliability and quality. In this context, researchers have conducted various DL testing studies to identify weaknesses in Deep Neural Network (DNN) models, including exploring test coverage, generating challenging test inputs, and selecting tests. In this study, we propose a generic DNN testing framework that takes the distribution of test data into consideration and prioritizes the data based on their potential to cause incorrect predictions by the tested DNN model. We evaluated the proposed framework using image classification as a use case and conducted empirical evaluations by implementing each phase with carefully chosen methods. We employed Variational Autoencoders to identify and eliminate out-of-distribution data from the test datasets. Additionally, we prioritized test data that increase the model's uncertainty, as these cases are more likely to reveal potential faults. Eliminating out-of-distribution data enables a more focused analysis to uncover the sources of DNN failures, while using prioritized test data reduces the cost of test data labeling. Furthermore, we explored the use of post-hoc explainability methods to identify the cause of incorrect predictions, a process similar to debugging. This study can serve as a prelude to incorporating explainability methods into the model development process after testing.
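To make the two filtering and prioritization phases described in the abstract concrete, the following is a minimal Python sketch, not the authors' implementation. It assumes a VAE-style out-of-distribution score based on reconstruction error and a predictive-entropy uncertainty score, which are only two possible instantiations of the phases; the function names (`filter_in_distribution`, `prioritize_by_uncertainty`, `recon_error_fn`, `predict_proba_fn`) and the `ood_threshold` parameter are hypothetical placeholders.

```python
# Illustrative sketch (not the paper's code): distribution-aware test selection
# with a reconstruction-error OOD filter and uncertainty-based prioritization.
import numpy as np

def filter_in_distribution(x_test, recon_error_fn, ood_threshold):
    """Keep only inputs whose VAE reconstruction error falls below the
    threshold, i.e. inputs the VAE treats as in-distribution (assumed criterion)."""
    errors = np.asarray([recon_error_fn(x) for x in x_test])
    keep = errors < ood_threshold
    return [x for x, k in zip(x_test, keep) if k]

def predictive_entropy(probs, eps=1e-12):
    """Entropy of the softmax output; higher values mean the DNN is less certain."""
    probs = np.clip(np.asarray(probs), eps, 1.0)
    return -np.sum(probs * np.log(probs), axis=-1)

def prioritize_by_uncertainty(x_test, predict_proba_fn):
    """Rank the remaining test inputs so the most uncertain (and thus more
    likely fault-revealing) ones are labeled and inspected first."""
    probs = np.stack([predict_proba_fn(x) for x in x_test])
    order = np.argsort(-predictive_entropy(probs))
    return [x_test[i] for i in order]

if __name__ == "__main__":
    # Toy stand-ins for the trained VAE and the DNN under test.
    rng = np.random.default_rng(0)
    x_test = [rng.normal(size=(28, 28)) for _ in range(100)]
    recon_error_fn = lambda x: float(np.mean(x ** 2))        # placeholder VAE error
    predict_proba_fn = lambda x: rng.dirichlet(np.ones(10))  # placeholder DNN softmax

    in_dist = filter_in_distribution(x_test, recon_error_fn, ood_threshold=1.1)
    ranked = prioritize_by_uncertainty(in_dist, predict_proba_fn)
    print(f"{len(in_dist)} in-distribution inputs; most uncertain ones come first")
```

In this reading of the framework, labeling effort is spent only on the top of the ranked list, and a post-hoc explainability method (for example, a saliency-based technique) would then be applied to the inputs that the DNN misclassifies.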