Academic Article

Development of a Real Time Vision-Based Hand Gesture Recognition System for Human-Computer Interaction
Document Type
Conference
Source
2023 IEEE 3rd Applied Signal Processing Conference (ASPCON), pp. 294-299, Nov. 2023
Subject
Bioengineering
Communication, Networking and Broadcast Technologies
Components, Circuits, Devices and Systems
Computing and Processing
Engineered Materials, Dielectrics and Plasmas
General Topics for Engineers
Power, Energy and Industry Applications
Robotics and Control Systems
Signal Processing and Analysis
Software algorithms
Signal processing algorithms
Gesture recognition
Assistive technologies
Signal processing
Chatbots
Software
American Sign Language
BOT control
communication
hand gesture recognition
human-computer interaction
Language
English
Abstract
Speech impairment is a disability that affects an individual’s capacity to communicate through speaking and hearing. Sign language is therefore essential for communication with individuals who are deaf or mute. American Sign Language (ASL) is among the most widely used sign languages worldwide, with regional variations. Our proposed system is vision-based hand gesture recognition software in which a camera and the software observe a set of sign-language gestures and convert them into plain text, so that the software makes it easy to communicate with a mute person. In the proposed system we used ASL as the language dataset and YOLO-v5 as the main algorithm model. We also integrated ‘BOT’ control with our hand gesture recognition system to demonstrate the capability of the model and the range of applications of hand gesture recognition. Service bots hold one-on-one conversations with users and call for "natural", intuitive interfaces. The main highlight is that a custom dataset can be used flexibly; for example, the bot can be moved forward with a sign such as ‘1’ instead of a sign-language letter such as ‘A’ or ‘B’. We captured nearly 700 pictures of sign-language gestures against different backgrounds, trained the model on them, and obtained an overall accuracy of 93% in our experiments.
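
For illustration only, the sketch below shows how such a pipeline could be wired together; it is not the authors’ code. It assumes a custom YOLOv5 weights file, here named asl_gestures.pt (a hypothetical name), trained on the gesture dataset described above and loaded through the public ultralytics/yolov5 torch.hub interface, plus a hypothetical gesture-to-bot-command table mirroring the abstract’s ‘1’-means-forward example.

```python
# A minimal sketch, not the authors' implementation. Assumes a custom
# YOLOv5 weights file "asl_gestures.pt" (hypothetical name) trained on
# the ~700-image gesture dataset, loaded via the public
# ultralytics/yolov5 torch.hub interface.
import cv2
import torch

# Hypothetical gesture-to-command table, illustrating the abstract's
# example of driving the bot with a sign like '1' instead of 'A' or 'B'.
GESTURE_TO_COMMAND = {"1": "FORWARD", "2": "BACKWARD", "3": "LEFT", "4": "RIGHT"}

model = torch.hub.load("ultralytics/yolov5", "custom", path="asl_gestures.pt")
model.conf = 0.5  # drop low-confidence detections

cap = cv2.VideoCapture(0)  # default webcam
while cap.isOpened():
    ok, frame = cap.read()
    if not ok:
        break
    rgb = cv2.cvtColor(frame, cv2.COLOR_BGR2RGB)  # YOLOv5 expects RGB
    results = model(rgb)
    # Each row of results.xyxy[0]: x1, y1, x2, y2, confidence, class index
    for *_, conf, cls in results.xyxy[0].tolist():
        label = model.names[int(cls)]
        command = GESTURE_TO_COMMAND.get(label)
        if command is not None:
            print(f"gesture {label!r} ({conf:.2f}) -> bot command {command}")
    # Show the annotated frame; render() draws the boxes on the RGB image.
    annotated = cv2.cvtColor(results.render()[0], cv2.COLOR_RGB2BGR)
    cv2.imshow("gesture recognition", annotated)
    if cv2.waitKey(1) & 0xFF == ord("q"):
        break
cap.release()
cv2.destroyAllWindows()
```

In a real deployment the print call would be replaced by whatever transport the bot uses (e.g. a serial or network command), but the structure, detect a gesture per frame and translate its class label into a control action, matches the flow the abstract describes.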