Academic Paper

Word-level Sign Language Recognition Using Linguistic Adaptation of 77 GHz FMCW Radar Data
Document Type
Conference
Source
2021 IEEE Radar Conference (RadarConf21), pp. 1-6, May 2021
Subject
Aerospace
General Topics for Engineers
Signal Processing and Analysis
Training
Radio frequency
Assistive technology
Neural networks
Kinematics
Gesture recognition
Radar
ASL
sign language
RF sensing
micro-Doppler
deep learning
Language
English
ISSN
2375-5318
Abstract
Over the years, there has been much research in both wearable and video-based American Sign Language (ASL) recognition systems. However, the restrictive and invasive nature of these sensing modalities remains a significant disadvantage in the context of Deaf-centric smart environments or devices that are responsive to ASL. This paper investigates the efficacy of RF sensors for word-level ASL recognition in support of human-computer interfaces designed for deaf or hard-of-hearing individuals. A principal challenge is the training of deep neural networks given the difficulty in acquiring native ASL signing data. In this paper, adversarial domain adaptation is exploited to bridge the physical/kinematic differences between the copysigning of hearing individuals (repetition of sign motion after viewing a video), and native signing of Deaf individuals who are fluent in sign language. Domain adaptation results are compared with those attained by directly synthesizing ASL signs using generative adversarial networks (GANs). Kinematic improvements to the GAN architecture, such as the insertion of micro-Doppler signature envelopes in a secondary branch of the GAN, are utilized to boost performance. Word-level classification accuracy of 91.3% is achieved for 20 ASL words.
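The abstract mentions feeding micro-Doppler signature envelopes into a secondary branch of the GAN. A minimal sketch of how such envelopes can be extracted from a micro-Doppler spectrogram is shown below; the function name `md_envelopes` and the -20 dB threshold are illustrative assumptions, not details from the paper.

```python
import numpy as np

def md_envelopes(spectrogram, threshold_db=-20.0):
    """Extract upper/lower micro-Doppler envelopes from a spectrogram.

    spectrogram: 2-D array (doppler_bins x time_frames), linear power.
    Returns (upper, lower): per-frame indices of the highest/lowest
    Doppler bin whose power lies within `threshold_db` of that frame's peak.
    Note: threshold choice is an assumption for illustration.
    """
    n_dopp, n_time = spectrogram.shape
    upper = np.zeros(n_time, dtype=int)
    lower = np.zeros(n_time, dtype=int)
    for t in range(n_time):
        col = spectrogram[:, t]
        peak = col.max()
        # Boolean mask of bins within threshold_db of the frame peak
        mask = 10.0 * np.log10(col / peak + 1e-12) > threshold_db
        idx = np.nonzero(mask)[0]
        upper[t] = idx.max()
        lower[t] = idx.min()
    return upper, lower
```

The two envelope curves (one value per time frame) trace the fastest approaching and receding motion components, which is why they carry kinematic information useful as an auxiliary GAN input.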