Journal Article

Audio Localization for Robots Using Parallel Cerebellar Models
Document Type
Periodical
Source
IEEE Robotics and Automation Letters, 3(4):3185-3192, Oct. 2018
Subject
Robotics and Control Systems
Computing and Processing
Components, Circuits, Devices and Systems
Robots
Adaptation models
Context modeling
Brain modeling
Acoustics
Predictive models
Calibration
Localization
learning and adaptive systems
robot audition
Language
English
ISSN
2377-3766
2377-3774
Abstract
A robot audio localization system is presented that combines the outputs of multiple adaptive filter models of the cerebellum to calibrate a robot's audio map for various acoustic environments. The system is inspired by the MOdular Selection for Identification and Control (MOSAIC) framework. This study extends our previous work, which used multiple cerebellar models to determine the acoustic environment in which a robot is operating. Here, the system selects a set of models and combines their outputs in proportion to the likelihood that each is responsible for calibrating the audio map as the robot moves between different acoustic environments, or contexts. The system was able to select an appropriate set of models, outperforming both a single model trained in all contexts (including novel contexts) and a baseline generalized cross-correlation with phase transform (GCC-PHAT) sound source localization algorithm. The main contribution of this letter is the combination of multiple calibrators, which allows a robot operating in the field to adapt to a range of different acoustic environments. The best performances were observed where the presence of a Responsibility Predictor was simulated.
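The MOSAIC-style mixing the abstract describes can be illustrated with a minimal sketch: each model's prediction error is turned into a normalized likelihood (its "responsibility"), and the models' outputs are blended in proportion to those weights. The Gaussian error likelihood, the `sigma` parameter, and the function names below are illustrative assumptions, not the paper's actual implementation.

```python
import numpy as np

def responsibilities(errors, sigma=1.0):
    """Soft responsibility weights from each model's prediction error.

    Assumes a Gaussian likelihood of the error (an illustrative choice);
    weights are normalized so they sum to 1, as in MOSAIC-style selection.
    """
    errors = np.asarray(errors, dtype=float)
    likelihood = np.exp(-0.5 * (errors / sigma) ** 2)
    return likelihood / likelihood.sum()

def combined_output(model_outputs, errors, sigma=1.0):
    """Blend model outputs in proportion to their responsibilities."""
    w = responsibilities(errors, sigma)
    return float(np.dot(w, np.asarray(model_outputs, dtype=float)))

# The model whose prediction error is smallest dominates the blend,
# so the calibration shifts smoothly as the robot changes context.
weights = responsibilities([0.1, 1.0, 2.0])
estimate = combined_output([10.0, 20.0, 30.0], [0.1, 1.0, 2.0])
```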