Academic Article

Recognizing Distraction for Assistive Driving by Tracking Body Parts Using Novel Convolutional Neural Network with LGM Classifier Over Random Forest with Improved Accuracy
Document Type
Conference
Source
2023 Intelligent Computing and Control for Engineering and Business Systems (ICCEBS), Dec. 2023, pp. 1-5
Subject
Bioengineering
Computing and Processing
Engineering Profession
General Topics for Engineers
Robotics and Control Systems
Signal Processing and Analysis
Deep learning
Training
Statistical analysis
Simulation
Software
Classification algorithms
Safety
Novel Convolutional Neural Network with LGM Classifier
Random Forest
Driver's Distraction
Deep Learning Algorithm
Machine Learning
Automatic Recognition
Road Traffic Accidents
Road Safety
Language
English
Abstract
This study investigates the accuracy of recent deep learning algorithms in identifying driver distraction. It presents a head-to-head comparison between a Convolutional Neural Network with LGM Classifier (CNNLGM) and Random Forest (RF). The investigation used a total of 118 samples, divided equally into two groups of 58 specimens each. Group 1 used the CNNLGM classifier, while Group 2 used the RF technique. The RF code was implemented in Google Colab, which was also used to import the dataset. The sample size was determined with an online statistical analysis tool, assuming a pre-test power of 80% and an alpha value of 0.05; prior studies supplied the information needed for this calculation. Simulation results showed that the novel CNNLGM classifier achieved an accuracy of 96%, whereas the RF algorithm achieved an accuracy of only 82%. The difference in accuracy between the two approaches was statistically significant (p = 0.001, p < 0.05).
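The comparison described in the abstract (training an RF classifier in Google Colab, then testing whether the two groups' accuracies differ at alpha = 0.05) can be sketched as below. This is a minimal illustration, not the paper's code: the dataset, features, and per-run accuracy samples are synthetic placeholders, and the group means are merely centred on the reported 96% and 82% figures.

```python
import numpy as np
from sklearn.datasets import make_classification
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import train_test_split
from sklearn.metrics import accuracy_score
from scipy import stats

rng = np.random.default_rng(0)

# Synthetic stand-in for the driver-distraction feature data
# (the actual study used body-part tracking features from images).
X, y = make_classification(n_samples=1000, n_features=20, random_state=0)
X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)

# Baseline Random Forest, as used by Group 2 in the study.
rf = RandomForestClassifier(n_estimators=100, random_state=0)
rf.fit(X_train, y_train)
rf_acc = accuracy_score(y_test, rf.predict(X_test))

# Hypothetical per-run accuracy samples for the two groups
# (58 runs each, centred on the reported means; spread is assumed).
cnnlgm_scores = rng.normal(0.96, 0.02, 58)
rf_scores = rng.normal(0.82, 0.02, 58)

# Independent-samples t-test at alpha = 0.05, mirroring the
# significance comparison reported in the abstract.
t_stat, p_value = stats.ttest_ind(cnnlgm_scores, rf_scores)
print(f"RF test accuracy: {rf_acc:.2f}")
print(f"t = {t_stat:.2f}, p = {p_value:.2e}")
```

With well-separated group means, the t-test yields a p-value far below 0.05, consistent with the significant difference the study reports.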