Academic Paper

A Comprehensive Multisensor Dataset Employing RGBD Camera, Inertial Sensor and Web Camera
Document Type
Conference
Source
2019 20th Asia-Pacific Network Operations and Management Symposium (APNOMS), pp. 1-4, Sep. 2019
Subject
Communication, Networking and Broadcast Technologies
Computing and Processing
Cameras
Acceleration
Gyroscopes
Estimation
Skeleton
Synchronization
Three-dimensional displays
Wearable sensors
Kinect
web camera
fitness
dataset
Language
English
Abstract
Over recent decades, fitness activities and extreme endurance events have been expanding throughout the world. The number of publicly available skeletal repositories and recognition/evaluation benchmarks has grown rapidly since Microsoft introduced the Kinect motion-sensing device. Kinect RGBD data has become a very useful representation of indoor scenes for solving activity/fitness recognition problems. Another sensor widely used in this area is the wearable inertial measurement unit (IMU). With numerous advanced sensors reaching mass adoption, this technology represents a possible approach to surpassing current activity recognition and evaluation solutions. Nevertheless, only a limited number of publicly available datasets capture depth camera, inertial sensor, and RGB image data simultaneously. In this paper, we introduce NCTU-MFD (National Chiao Tung University Multisensor Fitness Dataset), a comprehensive, diverse multisensor dataset collected using a Kinect RGBD sensor, wearable inertial sensors, and web cameras. The dataset contains 47,131 RGB images, 47,131 depth images, and 100 CSV files holding 47,131 skeletal frames (25 joints each) collected from the Kinect sensor. In addition, the dataset contains acceleration and gyroscope data from the IMU sensors, as well as 94,262 RGB images (47,131 from each of two web cameras). To demonstrate a possible use of the dataset, we conduct an experiment on depth-map evaluation.
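The abstract describes skeletal data stored as CSV files alongside synchronized RGB and depth images. As a minimal sketch of how one such recording might be loaded, the Python snippet below reads a skeletal CSV (25 joints, x/y/z per joint) and pairs each frame with its RGB and depth image paths. The file name "skeleton.csv", the per-session directory layout, the zero-padded frame naming, and the column order are all assumptions for illustration, not the dataset's documented format.

import os
import pandas as pd

NUM_JOINTS = 25  # Kinect v2 skeleton joint count, per the abstract

def load_recording(root, session="session_001"):
    """Load one hypothetical NCTU-MFD recording: a skeletal CSV plus
    per-frame RGB and depth image paths. Layout is assumed, not documented."""
    # Assumed: one CSV per session with x/y/z columns for each of 25 joints.
    skel = pd.read_csv(os.path.join(root, session, "skeleton.csv"))
    expected = 3 * NUM_JOINTS  # x, y, z coordinates per joint
    if skel.shape[1] < expected:
        raise ValueError(f"expected >= {expected} columns, got {skel.shape[1]}")

    frames = []
    for i in range(len(skel)):
        frames.append({
            # Reshape the flat row into a (25, 3) array of joint positions.
            "joints": skel.iloc[i].to_numpy()[:expected].reshape(NUM_JOINTS, 3),
            "rgb": os.path.join(root, session, "rgb", f"{i:06d}.png"),
            "depth": os.path.join(root, session, "depth", f"{i:06d}.png"),
        })
    return frames

Because the Kinect, IMU, and web-camera streams were captured simultaneously, a loader along these lines would typically be the first step before aligning the image frames with the acceleration and gyroscope samples by timestamp.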