Academic Article

Performance-Aware NILM Model Optimization for Edge Deployment
Document Type
Periodical
Source
IEEE Transactions on Green Communications and Networking, 7(3):1434-1446, Sep. 2023
Subject
Communication, Networking and Broadcast Technologies
Computing and Processing
General Topics for Engineers
Computational modeling
Optimization
Performance evaluation
Data models
Quantization (signal)
Mathematical models
Hidden Markov models
Edge inference
non-intrusive load monitoring
quantization
pruning
optimization
resource management
green computing
Language
English
ISSN
2473-2400
Abstract
Non-Intrusive Load Monitoring (NILM) refers to the extraction of an individual domestic appliance's consumption pattern from the aggregated household consumption. NILM research has recently shifted towards practical applications, such as edge deployment, to accelerate the transition towards a greener energy future. Running NILM at the edge eliminates privacy concerns and data-transmission problems; however, edge resource restrictions pose additional challenges. NILM approaches are usually not designed to run on edge devices with limited computational capacity, so model optimization is required for better resource management. Recent works have begun investigating NILM model optimization, but they apply compression techniques arbitrarily, without considering the trade-off between model performance and computational cost. In this work, we present a NILM model optimization framework for edge deployment. The proposed edge optimization engine adapts a NILM model to the target edge device's limitations and includes a novel performance-aware algorithm that reduces the model's computational complexity. We validate our methodology on three edge application scenarios, four domestic appliances, and four model architectures. Experimental results demonstrate that the proposed optimization approach achieves up to a 36.3% average reduction in model computational complexity and a 75% reduction in storage requirements.
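The abstract names quantization and pruning as the compression techniques under study. As a rough illustration only, and not the authors' actual engine, the following PyTorch sketch shows one way a performance-aware compression loop could be structured: prune progressively, keep the most compressed model whose validation error stays within a tolerance, then dynamically quantize for storage savings. The Seq2PointNILM network, the val_mae metric, the random validation tensors, and the 10% tolerance are all hypothetical placeholders.

```python
# Hypothetical sketch of performance-aware compression in the spirit of the
# abstract: prune a NILM model step by step, stop when validation error
# degrades past a tolerance, then quantize for edge storage. The model, data,
# and thresholds are illustrative placeholders, not the paper's framework.
import copy

import torch
import torch.nn as nn
import torch.nn.utils.prune as prune


class Seq2PointNILM(nn.Module):
    """Toy seq2point-style network: aggregate window -> appliance power."""

    def __init__(self, window: int = 99):
        super().__init__()
        self.net = nn.Sequential(
            nn.Conv1d(1, 16, kernel_size=5, padding=2),
            nn.ReLU(),
            nn.Flatten(),
            nn.Linear(16 * window, 64),
            nn.ReLU(),
            nn.Linear(64, 1),
        )

    def forward(self, x):
        return self.net(x)


def val_mae(model, x, y):
    """Mean absolute error on a held-out set (proxy for NILM accuracy)."""
    with torch.no_grad():
        return (model(x) - y).abs().mean().item()


# Placeholder validation data standing in for a real NILM test split.
x_val = torch.randn(256, 1, 99)
y_val = torch.randn(256, 1)

model = Seq2PointNILM()
baseline = val_mae(model, x_val, y_val)
tolerance = 1.10  # accept at most a 10% MAE increase (illustrative choice)

# Performance-aware pruning: raise sparsity until accuracy degrades too far.
best = copy.deepcopy(model)
for sparsity in (0.2, 0.4, 0.6, 0.8):
    candidate = copy.deepcopy(model)
    for module in candidate.modules():
        if isinstance(module, (nn.Conv1d, nn.Linear)):
            prune.l1_unstructured(module, name="weight", amount=sparsity)
            prune.remove(module, "weight")  # make the pruning permanent
    if val_mae(candidate, x_val, y_val) <= tolerance * baseline:
        best = candidate  # keep the most compressed acceptable model
    else:
        break

# Dynamic int8 quantization of the linear layers cuts storage further.
quantized = torch.quantization.quantize_dynamic(
    best, {nn.Linear}, dtype=torch.qint8
)
```

Stopping at the first sparsity level that violates the tolerance is one simple way to encode the performance/complexity trade-off the abstract highlights; the paper's actual algorithm is presumably more involved.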