Academic Paper

Wi-Fi and Radar Fusion for Head Movement Sensing Through Walls Leveraging Deep Learning
Document Type
Periodical
Source
IEEE Sensors Journal, 24(9):14952-14961, May 2024
Subject
Signal Processing and Analysis
Communication, Networking and Broadcast Technologies
Components, Circuits, Devices and Systems
Robotics and Control Systems
Magnetic heads
Sensors
Radar
Wireless fidelity
Deep learning
Radio frequency
Monitoring
Behavior monitoring
channel state information
deep learning (DL)
features fusion
machine learning (ML)
micro-Doppler signatures
radio frequency (RF) sensing
Language
English
ISSN
1530-437X
1558-1748
2379-9153
Abstract
The detection of head movement plays a crucial role in human–computer interaction systems. These systems depend on control signals to operate a range of assistive and augmented technologies, including wheelchairs for quadriplegics, virtual/augmented reality, and assistive driving. Driver drowsiness detection and alert systems aided by head movement detection can prevent major accidents and save lives. Wearable devices such as MagTrack, which consist of magnetic tags and magnetic eyeglass clips, are intrusive, while vision-based systems suffer from ambient lighting, line-of-sight, and privacy issues. Contactless sensing has therefore become an essential part of next-generation sensing and detection technologies. Wi-Fi and radar provide contactless sensing; however, in assistive driving the sensors must be placed inside enclosures or dashboards, which for all practical purposes are treated in this article as through-wall conditions. In this study, we propose a contactless system to detect human head movement with and without walls, using ultra-wideband (UWB) radar and Wi-Fi signals and leveraging machine learning (ML) and deep learning (DL) techniques. Our study analyzes six common head gestures, including right, left, up, and down movements. Time–frequency multiresolution analysis based on wavelet scalograms is used to extract features from channel state information (CSI) values, along with spectrograms from radar signals, for head movement detection. Feature fusion of the radar and Wi-Fi representations is performed with state-of-the-art DL models. Overall classification accuracies of 83.33% and 91.8% are achieved with the fusion of VGG16 and InceptionV3 model features trained on radar and Wi-Fi time–frequency maps with and without walls, respectively.
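The abstract describes feature-level fusion of pretrained VGG16 and InceptionV3 backbones applied to radar spectrograms and Wi-Fi CSI scalograms. The sketch below is not the authors' code; it is a minimal illustration of that fusion idea under assumed input sizes, layer widths, and class count, using standard Keras application models.

```python
# Minimal sketch (assumed architecture, not the authors' implementation):
# pretrained VGG16 and InceptionV3 act as fixed feature extractors on radar
# micro-Doppler spectrograms and Wi-Fi CSI wavelet scalograms; their pooled
# embeddings are concatenated and fed to a small head-movement classifier.
import tensorflow as tf
from tensorflow.keras import layers, Model
from tensorflow.keras.applications import VGG16, InceptionV3

NUM_CLASSES = 6  # number of head-gesture classes (assumed)

# Radar branch: VGG16 on radar spectrogram images (input size assumed).
radar_in = layers.Input(shape=(224, 224, 3), name="radar_spectrogram")
vgg = VGG16(include_top=False, weights="imagenet")
vgg.trainable = False  # use the ImageNet backbone as a frozen feature extractor
radar_feat = layers.GlobalAveragePooling2D()(vgg(radar_in))

# Wi-Fi branch: InceptionV3 on CSI scalogram images (input size assumed).
wifi_in = layers.Input(shape=(299, 299, 3), name="csi_scalogram")
inc = InceptionV3(include_top=False, weights="imagenet")
inc.trainable = False
wifi_feat = layers.GlobalAveragePooling2D()(inc(wifi_in))

# Feature-level fusion: concatenate both embeddings, then classify.
fused = layers.Concatenate()([radar_feat, wifi_feat])
x = layers.Dense(256, activation="relu")(fused)
x = layers.Dropout(0.5)(x)
out = layers.Dense(NUM_CLASSES, activation="softmax")(x)

model = Model(inputs=[radar_in, wifi_in], outputs=out)
model.compile(optimizer="adam",
              loss="categorical_crossentropy",
              metrics=["accuracy"])
model.summary()
```

Training would then pair each radar spectrogram with the corresponding CSI scalogram for the same head movement; the exact preprocessing (wavelet family, spectrogram parameters, fine-tuning schedule) is described in the paper itself and is not reproduced here.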