Academic Paper

Multimodal information fusion for human-robot interaction
Document Type
Conference
Source
2015 IEEE 10th Jubilee International Symposium on Applied Computational Intelligence and Informatics (SACI), pp. 535-540, May 2015
Subject
Bioengineering
Communication, Networking and Broadcast Technologies
Computing and Processing
Engineering Profession
Robotics and Control Systems
Support vector machines
Image color analysis
Face recognition
Face
Service robots
Emotion recognition
Hand Gesture Recognition
Human-Robot Interaction (HRI)
Language
English
Abstract
In this paper, we introduce a multimodal information fusion system for human-robot interaction. The fused multimodal information combines methods for hand sign recognition and for emotion recognition of multiple people. These complementary recognition modalities are essential for Human-Robot Interaction (HRI). Sign language is the most intuitive and direct way for impaired or disabled people to communicate: through hand or body gestures, they can easily let a caregiver or robot know what message they want to convey. Emotional interaction with human beings is likewise desirable for robots. In this study, we propose an integrated system that can track multiple people at the same time, recognize their facial expressions, and identify the social atmosphere. Consequently, robots can recognize the facial expressions and emotion variations of different people and respond properly. We have developed algorithms to determine hand signs via a process called the combinatorial approach recognizer equation; the two recognizers it combines are intended to complement each other's discriminative ability. In our facial expression recognition scheme, we fuse a feature-vector-based approach and a differential active appearance model (AAM) feature-based approach to obtain not only apposite positions of feature points but also richer information about texture and appearance. We have successfully demonstrated hand gesture recognition and emotion recognition experimentally as a proof of concept.
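The abstract does not spell out the fusion rule behind the "combinatorial approach recognizer equation," so the following is only a minimal, hypothetical sketch of one common way two recognizers' outputs can be combined: score-level fusion via a normalized weighted sum. The function name, weight parameter, and example scores are illustrative assumptions, not the authors' method.

```python
import numpy as np

# Hypothetical score-level fusion of two recognizers. The paper's actual
# "combinatorial approach recognizer equation" is not specified in the
# abstract; this weighted-sum rule is an illustrative assumption only.

def fuse_scores(scores_a: np.ndarray, scores_b: np.ndarray,
                weight_a: float = 0.5) -> int:
    """Combine per-class confidence scores from two recognizers and
    return the index of the winning class."""
    # Normalize each recognizer's scores so neither dominates by scale.
    norm_a = scores_a / scores_a.sum()
    norm_b = scores_b / scores_b.sum()
    fused = weight_a * norm_a + (1.0 - weight_a) * norm_b
    return int(np.argmax(fused))

# Example: per-class scores from two complementary recognizers over the
# same four gesture classes (values are made up for illustration).
recognizer_a_scores = np.array([0.1, 0.6, 0.2, 0.1])
recognizer_b_scores = np.array([0.2, 0.3, 0.4, 0.1])
print(fuse_scores(recognizer_a_scores, recognizer_b_scores))  # -> 1
```

Under this assumed scheme, the weight could be tuned so that whichever recognizer discriminates a given class more reliably contributes more to the fused decision, which matches the abstract's stated goal of having the two recognizers complement each other.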