Academic Journal Article

Sign Language Recognition Using Graph and General Deep Neural Network Based on Large Scale Dataset
Document Type
Periodical
Source
IEEE Access, 12:34553-34569, 2024
Subject
Aerospace
Bioengineering
Communication, Networking and Broadcast Technologies
Components, Circuits, Devices and Systems
Computing and Processing
Engineered Materials, Dielectrics and Plasmas
Engineering Profession
Fields, Waves and Electromagnetics
General Topics for Engineers
Geoscience
Nuclear Engineering
Photonics and Electrooptics
Power, Energy and Industry Applications
Robotics and Control Systems
Signal Processing and Analysis
Transportation
Sign language
Feature extraction
Assistive technologies
Streams
Face recognition
Complexity theory
Pose estimation
Graph neural networks
Convolutional neural networks
Deep learning
Sign language recognition (SLR)
American sign language (ASL)
large scale dataset
hand pose
graph convolutional network (GCN)
graph convolutional with attention and residual connection (GCAR)
deep learning network
Language
English
ISSN
2169-3536
Abstract
Sign Language Recognition (SLR) is a technology that aims to establish communication between hearing-impaired and hearing communities without relying on traditional interpreter-based approaches. Existing efforts in automatic sign recognition predominantly rely on hand skeleton joint information rather than raw image pixels, which helps overcome partial occlusion and redundant-background problems. However, besides hand information, body motion and facial expression play an essential role in expressing sign language emotion and contribute substantially to gesture variance in large-scale sign word datasets. Recently, some researchers have developed multi-gesture-based SLR systems, but their accuracy and efficiency remain unsatisfactory for real-time deployment. Addressing these limitations, we propose a novel approach, a two-stream multistage graph convolution with attention and residual connection (GCAR), designed to extract spatial-temporal contextual information. The multistage GCAR system incorporates a channel attention module that dynamically enhances attention, particularly for non-connected skeleton points during specific events within the spatial-temporal features. The methodology captures joint skeleton points and joint motion, offering a comprehensive view of a person’s whole-body movement during sign language gestures, and feeds this information into two streams. In the first stream, the joint key features are processed through separable temporal convolution (sep-TCN), graph convolution, a deep learning layer, and a channel attention module across multiple stages, generating rich spatial-temporal features of the sign language gestures. Simultaneously, the joint motion is processed in the second stream, mirroring the steps of the first. The fusion of the two feature sets yields the final feature vector, which is fed into the classification module. The model excels at capturing discriminative structural displacements and short-range dependencies by leveraging unified joint features projected onto a high-dimensional space. Owing to the effectiveness of these features, the proposed method achieved accuracies of 90.31%, 94.10%, 99.75%, and 34.41% on the WLASL, PSL, MSL, and ASLLVD large-scale datasets, respectively, with 0.69 million parameters. The high accuracy, coupled with stable computational complexity, demonstrates the superiority of the proposed model. This approach is anticipated to redefine the landscape of sign language recognition, setting a new standard in the field.
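To make the two-stream pipeline described in the abstract concrete, the sketch below shows one plausible reading of it in PyTorch: a GCAR-style block combining a graph convolution over skeleton joints, a depthwise-separable temporal convolution (sep-TCN), squeeze-and-excitation-style channel attention, and a residual connection, applied in parallel to a joint stream and a frame-difference motion stream before fusion and classification. All layer sizes, the learnable adjacency matrix, the attention form, and the concatenation-based fusion are illustrative assumptions, not the authors' published implementation.

```python
# Hypothetical sketch of a GCAR-style two-stream model (not the paper's code).
# Assumed tensor layout: (batch, channels, frames, joints).
import torch
import torch.nn as nn


class ChannelAttention(nn.Module):
    """Squeeze-and-excitation-style channel attention (assumed form)."""
    def __init__(self, channels, reduction=4):
        super().__init__()
        self.pool = nn.AdaptiveAvgPool2d(1)  # squeeze over frames and joints
        self.fc = nn.Sequential(
            nn.Linear(channels, channels // reduction), nn.ReLU(),
            nn.Linear(channels // reduction, channels), nn.Sigmoid())

    def forward(self, x):
        b, c, _, _ = x.shape
        w = self.fc(self.pool(x).view(b, c)).view(b, c, 1, 1)
        return x * w  # re-weight channels


class GCARBlock(nn.Module):
    """Graph conv + separable temporal conv + channel attention + residual."""
    def __init__(self, in_ch, out_ch, num_joints, kernel_t=9):
        super().__init__()
        # Learnable adjacency, initialised to identity (placeholder for the skeleton graph).
        self.adj = nn.Parameter(torch.eye(num_joints))
        self.gcn = nn.Conv2d(in_ch, out_ch, kernel_size=1)  # per-joint feature projection
        self.sep_tcn = nn.Sequential(  # depthwise-separable temporal convolution
            nn.Conv2d(out_ch, out_ch, (kernel_t, 1),
                      padding=(kernel_t // 2, 0), groups=out_ch),
            nn.Conv2d(out_ch, out_ch, kernel_size=1),
            nn.BatchNorm2d(out_ch), nn.ReLU())
        self.att = ChannelAttention(out_ch)
        self.res = nn.Conv2d(in_ch, out_ch, 1) if in_ch != out_ch else nn.Identity()

    def forward(self, x):
        # Aggregate over joints via the adjacency matrix, then project and refine.
        y = torch.einsum('bctv,vw->bctw', x, self.adj)
        y = self.att(self.sep_tcn(self.gcn(y)))
        return torch.relu(y + self.res(x))  # residual connection


class TwoStreamGCAR(nn.Module):
    """Joint stream + motion (frame-difference) stream, fused before classification."""
    def __init__(self, num_joints=27, num_classes=100, in_ch=3):
        super().__init__()
        self.joint_stream = nn.Sequential(
            GCARBlock(in_ch, 32, num_joints), GCARBlock(32, 64, num_joints))
        self.motion_stream = nn.Sequential(
            GCARBlock(in_ch, 32, num_joints), GCARBlock(32, 64, num_joints))
        self.classifier = nn.Linear(128, num_classes)

    def forward(self, joints):  # joints: (B, C, T, V)
        motion = torch.zeros_like(joints)
        motion[:, :, 1:] = joints[:, :, 1:] - joints[:, :, :-1]  # temporal differences
        fj = self.joint_stream(joints).mean(dim=(2, 3))   # global average pooling
        fm = self.motion_stream(motion).mean(dim=(2, 3))
        return self.classifier(torch.cat([fj, fm], dim=1))  # fused features -> logits


if __name__ == "__main__":
    model = TwoStreamGCAR(num_joints=27, num_classes=100)
    x = torch.randn(2, 3, 64, 27)   # 2 clips, xyz coordinates, 64 frames, 27 joints
    print(model(x).shape)           # torch.Size([2, 100])
```

Feeding the motion stream with simple temporal differences of the joint coordinates mirrors the abstract's description of processing joint motion through the same stages as the joint stream; the actual keypoint set, number of stages, and fusion strategy in the paper may differ.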