Journal Article

EdgeActNet: Edge Intelligence-Enabled Human Activity Recognition Using Radar Point Cloud
Document Type
Periodical
Source
IEEE Transactions on Mobile Computing, 23(5):5479-5493, May 2024
Subject
Computing and Processing
Communication, Networking and Broadcast Technologies
Signal Processing and Analysis
Point cloud compression
Human activity recognition
Radar
Sensors
Image edge detection
Computational modeling
Real-time systems
radar
point cloud
binary neural network
edge intelligence
Language
English
ISSN
1536-1233
1558-0660
2161-9875
Abstract
Human activity recognition (HAR) has become a research hotspot because of its wide range of prospective applications, and it places stringent demands on real-time, power-efficient processing. However, the large volume of data transferred between sensors and servers and the computation-intensive recognition models hinder the implementation of real-time HAR systems. Recently, edge computing has been proposed to address this challenge by moving computation and data storage to the sensors rather than depending on a centralized server or cloud. In this paper, we investigate binary neural networks (BNNs) for edge intelligence-enabled HAR using radar point clouds. Point clouds provide 3-dimensional spatial information, which helps improve recognition accuracy, but time-series point clouds also bring challenges such as a larger data volume, 4-dimensional data processing, and more intensive computation. To tackle these challenges, we adopt 2-dimensional histograms for multi-view point cloud processing and propose EdgeActNet, a binary neural network for point cloud-based human activity classification on edge devices. In the evaluation, EdgeActNet achieved the best results, with average accuracies of 97.63% on the MMActivity dataset and 95.03% on the point cloud samples of the DGUHA dataset, respectively, while reducing memory consumption by 16.9× and inference time by 11.5× compared with its full-precision version. Our work is also the first to apply a 2D histogram-based multi-view representation and BNNs to time-series point cloud classification.
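The multi-view idea described in the abstract can be illustrated with a minimal sketch: each radar point cloud frame is projected onto coordinate planes and binned into 2D histograms, so a time-series sample becomes a stack of image-like views that a compact (binarized) CNN can consume. This is a hypothetical NumPy reconstruction under stated assumptions; the function name multi_view_histograms, the choice of x-y/x-z/y-z planes, the bin count, and the coordinate bounds are illustrative and not the paper's exact preprocessing pipeline.

```python
# Hypothetical sketch of 2D histogram-based multi-view encoding of a radar
# point cloud frame. Plane choices, grid size, and value ranges are assumed
# for illustration; they are not the exact EdgeActNet preprocessing settings.
import numpy as np

def multi_view_histograms(points, bins=32, bounds=((-3, 3), (-3, 3), (0, 3))):
    """Project an (N, 3) point cloud onto the x-y, x-z, and y-z planes and
    bin the points into 2D histograms, yielding a (3, bins, bins) array."""
    xr, yr, zr = bounds
    xy, _, _ = np.histogram2d(points[:, 0], points[:, 1], bins=bins, range=(xr, yr))
    xz, _, _ = np.histogram2d(points[:, 0], points[:, 2], bins=bins, range=(xr, zr))
    yz, _, _ = np.histogram2d(points[:, 1], points[:, 2], bins=bins, range=(yr, zr))
    return np.stack([xy, xz, yz]).astype(np.float32)

# A time-series sample of T frames becomes a (T, 3, bins, bins) tensor.
frames = [np.random.uniform(-2, 2, size=(128, 3)) for _ in range(10)]  # dummy frames
sample = np.stack([multi_view_histograms(f) for f in frames])
print(sample.shape)  # (10, 3, 32, 32)
```

Collapsing each frame to fixed-size 2D views in this way bounds the input size regardless of how many points the radar returns per frame, which is what makes the subsequent network small enough for edge deployment.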