Academic Paper

Mutual assistance between speech and vision for human-robot interaction
Document Type
Conference
Source
2008 IEEE/RSJ International Conference on Intelligent Robots and Systems (IROS 2008), pp. 4011-4016, Sep. 2008
Subject
Robotics and Control Systems
Computing and Processing
Robots
Speech recognition
Speech
Robot kinematics
Hidden Markov models
Target tracking
Three dimensional displays
multiple object tracking
speech understanding
multimodal interaction
robotic assistance
Language
English
ISSN
2153-0858
2153-0866
Abstract
Among the cognitive abilities a robot companion must be endowed with, human perception and speech understanding are both fundamental in the context of multimodal human-robot interaction. First, we propose a multiple object visual tracker, interactively distributed and dedicated to tracking two-handed gestures and head location in 3D. An on-board speech understanding system is also developed to process deictic and anaphoric utterances. Characteristics and performance of each of the two components are presented. Finally, integration and experiments on a robot companion highlight the relevance and complementarity of our multimodal interface. An outlook on future work is also discussed.