Scholarly Article

Deep Learning Based Multi Pose Human Face Matching System
Document Type
Periodical
Source
IEEE Access. 12:26046-26061, 2024
Subject
Aerospace
Bioengineering
Communication, Networking and Broadcast Technologies
Components, Circuits, Devices and Systems
Computing and Processing
Engineered Materials, Dielectrics and Plasmas
Engineering Profession
Fields, Waves and Electromagnetics
General Topics for Engineers
Geoscience
Nuclear Engineering
Photonics and Electrooptics
Power, Energy and Industry Applications
Robotics and Control Systems
Signal Processing and Analysis
Transportation
Face recognition
Deep learning
YOLO
Real-time systems
Three-dimensional displays
Face detection
Feature extraction
Pattern matching
Pose estimation
Identification of persons
YOLO-V5
Language
English
ISSN
2169-3536
Abstract
Current techniques for multi-pose human face matching yield suboptimal outcomes because of the intricate nature of pose equalization and face rotation. Deep learning models such as YOLO-V5 that have been proposed to tackle these complexities suffer from slow frame-matching speeds and therefore exhibit low face recognition accuracy. Some prior studies have investigated multi-pose human face detection systems; however, they remain preliminary and do not adequately analyze the utility of such systems. To fill this research gap, we propose a real-time face matching algorithm based on YOLO-V5. Our algorithm exploits multi-pose human patterns and considers various face orientations, including frontal faces and left, right, top, and bottom alignments, to recognize people from multiple aspects. Using face poses, the algorithm identifies face positions in a dataset of images obtained from mixed-pattern live streams and compares each face against a specific facial region with a relatively similar spectrum to match it with the given dataset. Once a match is found, the algorithm displays the face in Google Colab, using data collected during the learning phase with the Roboflow annotation key, and tracks it with the YOLO-V5 face monitor. Alignment variations are separated into distinct positions, and each face type is learned individually. This method offers several benefits for identifying and monitoring humans via their labeling tag as a pattern name, including high face-matching accuracy and minimal slowdown owing to face-to-pose variations. Furthermore, the algorithm addresses the face rotation issue by introducing a mixture of error functions for execution time, accuracy loss, frame-wise failure, and identity loss, which guides the authenticity of the produced image frame. Experimental results confirm the effectiveness of the algorithm in terms of improved accuracy and reduced delay in the face-matching paradigm.
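
The abstract describes a detect-then-match pipeline: a YOLO-V5 detector trained on Roboflow-annotated face poses localizes faces in frames from a live stream, and each detected crop is compared against a labelled gallery before being tracked and displayed. The sketch below illustrates that flow under stated assumptions only; the weight file name, the gallery structure, the cosine-similarity matcher, and the threshold are illustrative choices, not details taken from the paper.

# Hypothetical sketch of the pipeline summarized in the abstract: detect faces with a
# custom-trained YOLO-V5 model, then match each detected crop against a small gallery
# of labelled face crops using cosine similarity. Paths, threshold, and the similarity
# measure are assumptions for illustration, not values from the paper.
import cv2
import numpy as np
import torch

# Load YOLO-V5 with custom weights (e.g. trained on Roboflow-annotated face poses).
model = torch.hub.load("ultralytics/yolov5", "custom", path="best.pt")

def crop_signature(bgr_crop, size=(64, 64)):
    """Resize a face crop to a fixed grid and return a unit-norm grayscale feature vector."""
    gray = cv2.cvtColor(cv2.resize(bgr_crop, size), cv2.COLOR_BGR2GRAY)
    vec = gray.astype(np.float32).ravel()
    return vec / (np.linalg.norm(vec) + 1e-8)

def match_faces(frame, gallery, threshold=0.9, min_conf=0.25):
    """Detect faces in a frame and return (label, box, score) for matches in the gallery.

    gallery: non-empty dict mapping a labeling tag (pattern name) to a crop_signature vector.
    """
    detections = model(frame).xyxy[0].cpu().numpy()  # rows: x1, y1, x2, y2, conf, cls
    matches = []
    for x1, y1, x2, y2, conf, _ in detections:
        if conf < min_conf:
            continue
        crop = frame[int(y1):int(y2), int(x1):int(x2)]
        if crop.size == 0:
            continue
        sig = crop_signature(crop)
        # Cosine similarity against every labelled gallery face; keep the best match.
        label, score = max(((name, float(sig @ ref)) for name, ref in gallery.items()),
                           key=lambda item: item[1])
        if score >= threshold:
            matches.append((label, (int(x1), int(y1), int(x2), int(y2)), score))
    return matches

# Example usage (hypothetical files): build a gallery from labelled reference crops,
# then run matching on a single frame read from disk or a video stream.
# gallery = {"person_a": crop_signature(cv2.imread("person_a.jpg"))}
# print(match_faces(cv2.imread("frame.jpg"), gallery))

The mixture of error functions mentioned in the abstract (execution time, accuracy loss, frame-wise failure, and identity loss) could be accumulated per frame inside the same loop, e.g. as a weighted sum of the four terms; the paper's exact formulation is not reproduced here, so any such combination should be treated as an assumption.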