Academic Paper

A Triangulation-Based Visual Localization for Field Robots
Document Type
Periodical
Source
IEEE/CAA Journal of Automatica Sinica, 9(6):1083-1086, Jun. 2022
Subject
Computing and Processing
Communication, Networking and Broadcast Technologies
General Topics for Engineers
Robotics and Control Systems
Language
English
ISSN
2329-9266
2329-9274
Abstract
Dear Editor, Visual localization relies on local features and searches a pre-stored GPS-tagged image database to retrieve the reference image with the highest similarity in feature space, which is then used to predict the current location [1]–[3]. In conventional methods [4]–[6], local features are generally obtained by multi-stage feature extraction, which first detects and then describes key-point features [4], [7]. This multi-stage extraction involves redundant computation and is neither memory- nor run-time-efficient. Its performance also degrades under challenging conditions such as poor lighting and weather variations (as shown in Fig. 1(a)), because the multi-stage design may lose information in the quantization step, producing inadequate key-point features for matching. Another critical issue for existing visual localization is that most conventional methods are one-directional approaches, which use only one-directional (front-view) images to search and match GPS-tagged references [4], [8]. As the database grows, one-directional inputs can become homogeneous, which makes it difficult for the localization algorithms to work robustly (as shown in Fig. 1(b)).
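
For orientation, the retrieval step described in the abstract can be viewed as a nearest-neighbour search over image descriptors: the query descriptor is compared against every GPS-tagged reference descriptor, and the GPS tag of the best match is returned as the location estimate. The sketch below is illustrative only and is not the letter's method; the use of cosine similarity, the descriptor dimensionality, and all function names are assumptions introduced here for clarity.

    import numpy as np

    def build_database(reference_descriptors, gps_tags):
        # Stack one descriptor per reference image and keep its GPS tag alongside.
        return np.stack(reference_descriptors), list(gps_tags)

    def localize(query_descriptor, db_descriptors, db_gps):
        # Return the GPS tag of the most similar reference image (cosine similarity).
        q = query_descriptor / np.linalg.norm(query_descriptor)
        refs = db_descriptors / np.linalg.norm(db_descriptors, axis=1, keepdims=True)
        similarities = refs @ q                      # one score per reference image
        best = int(np.argmax(similarities))          # index of the best-matching reference
        return db_gps[best], float(similarities[best])

    # Hypothetical usage: descriptors would come from a feature extractor (not shown);
    # random vectors and synthetic GPS tags stand in for a real database.
    db_desc, db_gps = build_database([np.random.rand(256) for _ in range(100)],
                                     [(37.0 + i * 1e-4, 127.0) for i in range(100)])
    location, score = localize(np.random.rand(256), db_desc, db_gps)

With a single front-view query, ambiguity arises exactly as the abstract notes: once the database grows, many references can have near-identical similarity scores, so the argmax becomes unreliable under poor lighting or homogeneous scenes.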