Academic paper

A Flexible Deep Learning Architecture for Temporal Sleep Stage Classification Using Accelerometry and Photoplethysmography
Document Type
Periodical
Source
IEEE Transactions on Biomedical Engineering, 70(1):228-237, Jan. 2023
Subject
Bioengineering
Computing and Processing
Components, Circuits, Devices and Systems
Communication, Networking and Broadcast Technologies
Recording
Sleep apnea
Deep learning
Decoding
Rapid eye movement sleep
Feature extraction
Classification algorithms
MHealth
deep learning
wrist actigraphy
sleep stage classification
consumer sleep technologies
Language
English
ISSN
0018-9294 (print)
1558-2531 (electronic)
Abstract
Wrist-worn consumer sleep technologies (CST) that contain accelerometers (ACC) and photoplethysmography (PPG) are increasingly common and hold great potential to function as out-of-clinic (OOC) sleep monitoring systems. However, very few validation studies exist because raw data from CSTs are rarely made accessible for external use. We present a deep neural network (DNN) with a strong temporal core, inspired by U-Net, that can process multivariate time series inputs of different dimensionality to predict sleep stages (wake, light, deep, and REM sleep) from ACC and PPG signals in nocturnal recordings. The DNN was trained and tested on three internal datasets, comprising raw data from both clinical and wrist-worn devices across 301 recordings (PSG-PPG: 266, wrist-worn PPG: 35). External validation was performed on a hold-out test dataset of 35 recordings containing only raw data from a wrist-worn CST. Accuracies of 0.71 ± 0.09, 0.76 ± 0.07, and 0.73 ± 0.06, and κ values of 0.58 ± 0.13, 0.64 ± 0.09, and 0.59 ± 0.09, were achieved on the internal test sets. Our experiments show that spectral preprocessing yields superior performance compared to surrogate-, feature-, and raw-data-based preparation. Combining both modalities produced the overall best performance, although PPG proved the more impactful modality and was the only one capable of detecting REM sleep well. Including ACC improved the model's precision for wake detection and its sleep metric estimation. Increasing the input segment size improved performance consistently; the best performance was achieved using 1024 epochs (∼8.5 hrs.). An accuracy of 0.69 ± 0.13 and κ of 0.58 ± 0.18 were achieved on the hold-out test dataset, demonstrating the generalizability and robustness of our approach on raw data collected with a wrist-worn CST.
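The abstract reports that spectral preprocessing of the input signals outperformed surrogate-, feature-, and raw-data-based preparation. As an illustration only, the sketch below shows one common way such preprocessing can be done: splitting a signal into standard 30-s sleep-scoring epochs and computing a log-power spectrogram per epoch. The sampling rate, window length, and overlap here are assumptions for the example, not values taken from the paper.

```python
import numpy as np
from scipy.signal import spectrogram

FS = 64         # assumed PPG sampling rate in Hz (illustrative, not from the paper)
EPOCH_SEC = 30  # standard sleep-scoring epoch length

def epoch_spectrograms(signal, fs=FS, epoch_sec=EPOCH_SEC):
    """Split a 1-D signal into 30-s epochs and return one log-power
    spectrogram per epoch, shape (n_epochs, n_freqs, n_times)."""
    samples_per_epoch = fs * epoch_sec
    n_epochs = len(signal) // samples_per_epoch
    specs = []
    for i in range(n_epochs):
        seg = signal[i * samples_per_epoch:(i + 1) * samples_per_epoch]
        # 2-s windows with 50% overlap; these parameters are assumptions
        _, _, Sxx = spectrogram(seg, fs=fs, nperseg=2 * fs, noverlap=fs)
        specs.append(np.log(Sxx + 1e-10))  # log compression stabilizes dynamic range
    return np.stack(specs)

# Example: 1024 epochs (~8.5 h, the best-performing input segment size reported)
rng = np.random.default_rng(0)
ppg_like = rng.standard_normal(1024 * FS * EPOCH_SEC).astype(np.float32)
specs = epoch_spectrograms(ppg_like)
print(specs.shape)  # (1024, n_freqs, n_times)
```

A stack of per-epoch spectrograms like this is a typical input shape for a U-Net-style temporal model that predicts one stage label per epoch.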