Academic Paper

YO-VIO: Robust Multi-Sensor Semantic Fusion Localization in Dynamic Indoor Environments
Document Type
Conference
Source
2021 International Conference on Indoor Positioning and Indoor Navigation (IPIN), pp. 1-6, Nov. 2021
Subject
Components, Circuits, Devices and Systems
Computing and Processing
General Topics for Engineers
Geoscience
Robotics and Control Systems
Signal Processing and Analysis
Transportation
Visualization
Simultaneous localization and mapping
Three-dimensional displays
Service robots
Heuristic algorithms
Semantics
Object detection
indoor robot localization
visual-inertial odometry
dynamic environment
semantics
multi-sensor
Language
English
ISSN
2471-917X
Abstract
Visual Simultaneous Localization and Mapping (SLAM) is widely employed in modern mobile service robots, enabling them to estimate their pose from images captured indoors. However, typical visual odometry (VO) and visual-inertial odometry (VIO) systems, the front ends of visual SLAM, assume a static environment. In many scenarios they must operate in highly dynamic environments, which challenges previous visual SLAM approaches. In this paper, we propose a novel monocular VIO for such challenging dynamic environments that enables a robot to localize accurately and robustly. Built on VINS-Mono, our system adds a detection module for dynamic objects and feature points. This module combines semantic object detection, multi-sensor aiding, and geometric 3D vision constraints to remove dynamic feature points. Our experiments demonstrate that our system outperforms state-of-the-art monocular VIO systems in accuracy and robustness, especially in highly dynamic indoor environments.
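The abstract's dynamic-feature-removal idea can be illustrated with a minimal sketch: reject a feature correspondence if it falls inside a detected dynamic-object bounding box (the semantic cue) or violates the epipolar constraint between two frames (the geometric cue). This is not the paper's implementation; all names (`DYNAMIC_CLASSES`, `filter_dynamic_features`, the detection dict layout) and the residual threshold are illustrative assumptions.

```python
# Illustrative sketch of semantic + geometric dynamic-feature filtering.
# Not the paper's actual module; names and thresholds are assumptions.
import numpy as np

DYNAMIC_CLASSES = {"person", "dog", "cat"}  # hypothetical movable-object labels

def in_any_box(pt, boxes):
    """True if pixel pt = (x, y) lies inside any (x0, y0, x1, y1) box."""
    x, y = pt
    return any(x0 <= x <= x1 and y0 <= y <= y1 for (x0, y0, x1, y1) in boxes)

def epipolar_residual(p1, p2, F):
    """Distance (in pixels) of p2 to the epipolar line of p1 under
    fundamental matrix F, i.e. |p2^T F p1| normalized by the line norm."""
    p1h = np.array([p1[0], p1[1], 1.0])
    p2h = np.array([p2[0], p2[1], 1.0])
    line = F @ p1h  # epipolar line in the second image
    return abs(p2h @ line) / np.hypot(line[0], line[1])

def filter_dynamic_features(matches, detections, F, thresh_px=2.0):
    """Keep correspondences (p1, p2) that are outside dynamic-object
    boxes and satisfy the epipolar constraint within thresh_px pixels."""
    boxes = [d["box"] for d in detections if d["label"] in DYNAMIC_CLASSES]
    static = []
    for p1, p2 in matches:
        if in_any_box(p2, boxes):
            continue  # semantic cue: point lies on a movable object
        if epipolar_residual(p1, p2, F) > thresh_px:
            continue  # geometric cue: motion inconsistent with ego-motion
        static.append((p1, p2))
    return static
```

The paper additionally fuses multi-sensor (IMU-aided) information into this decision, which the sketch omits; in a real system F would come from the VIO pose prediction rather than a separate estimation step.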