
International Research Journal of Engineering and Technology (IRJET)    e-ISSN: 2395-0056
Volume: 09 Issue: 06 | June 2022    www.irjet.net    p-ISSN: 2395-0072

Hand Gesture Recognition System Using Holistic Mediapipe

Ritesh Kate1, Pranav Brahmabhatt2, Prof. Swati Dhopte3, Prof. Tushar Mane4

1,2 Student, Department of Computer Science Engineering, MIT School of Engineering, Pune, Maharashtra, India
3,4 Professor, Department of Computer Science Engineering, MIT School of Engineering, Pune, Maharashtra, India

Abstract - Sign languages are visual languages produced by the movement of the hands, face, and body. In this paper, we evaluate representations based on skeleton poses, as these are explainable, person-independent, privacy-preserving, low-dimensional representations. Skeletal representations generalize over an individual's appearance and background, allowing us to focus on the recognition of motion. But how much information is lost by the skeletal representation? We perform two independent studies using two state-of-the-art pose estimation systems, and we analyze the applicability of these systems to sign language recognition by evaluating the failure cases of the recognition models. Importantly, this allows us to characterize the current limitations of skeletal pose estimation approaches in sign language recognition.

1.2 What is Real Time Gesture Recognition?

Gesture recognition is an active research field in Human-Computer Interaction. It has many applications, including visual environment control, sign language translation, robot control, and music creation. In this machine learning project on hand gesture recognition, we create a real-time gesture recognizer using the MediaPipe and TensorFlow frameworks with OpenCV and Python. Many kinds of technologies and computer vision algorithms have been used to translate sign language.
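A key step in a MediaPipe-based pipeline is turning each frame's Holistic output (pose, face, and two hand landmark sets) into one fixed-length feature vector. The sketch below assumes the documented MediaPipe Holistic landmark counts (33 pose landmarks with x, y, z, visibility; 468 face and 21 per-hand landmarks with x, y, z); the function name `extract_keypoints` and the zero-filling of missing parts are illustrative choices, not necessarily the authors' exact implementation.

```python
import numpy as np

# Flattened sizes of each MediaPipe Holistic landmark group.
POSE_DIM = 33 * 4   # x, y, z, visibility per pose landmark
FACE_DIM = 468 * 3  # x, y, z per face landmark
HAND_DIM = 21 * 3   # x, y, z per hand landmark

def extract_keypoints(results):
    """Flatten one Holistic result into a fixed-length vector (1662 values).

    Parts that were not detected (e.g. a hand outside the frame) are
    replaced with zeros so every frame yields the same vector length.
    """
    def flatten(landmarks, with_visibility, size):
        if landmarks is None:
            return np.zeros(size)
        vals = []
        for lm in landmarks.landmark:
            vals.extend([lm.x, lm.y, lm.z])
            if with_visibility:
                vals.append(lm.visibility)
        return np.array(vals)

    pose = flatten(results.pose_landmarks, True, POSE_DIM)
    face = flatten(results.face_landmarks, False, FACE_DIM)
    lh = flatten(results.left_hand_landmarks, False, HAND_DIM)
    rh = flatten(results.right_hand_landmarks, False, HAND_DIM)
    return np.concatenate([pose, face, lh, rh])
```

A sequence of these per-frame vectors (e.g. 30 consecutive frames) is what a downstream sequence model consumes.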

1.3 Sign Language Recognition System

It is always difficult to communicate with someone who has a hearing impairment. Sign language has become an invaluable solution: people with hearing and speech impairments can now express their feelings and thoughts to the whole world, and it greatly simplifies their integration with others. However, sign language alone is not enough, and this blessing comes with many strings attached: for someone who has never studied it, the signs are often mixed up and confusing. With the advent of automated sign recognition methods, this long-standing communication gap can now be closed. In this paper, we provide a sign language recognition program based on capturing personal actions. The user captures images of hand gestures with a webcam, and the system predicts and displays the name of the captured gesture. The main goal of this project is to help the deaf, mute, and blind community communicate like any ordinary person.
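The "predict and display" step described above typically buffers per-frame keypoint vectors and only shows a label once the model is confident, so the on-screen text does not flicker between candidates. The sketch below is a minimal, hardware-free version of that logic; the class name `GestureDisplay`, the 30-frame window, and the 0.7 confidence threshold are assumptions for illustration, not values taken from the paper.

```python
from collections import deque

SEQUENCE_LENGTH = 30  # frames per prediction window (assumed)
THRESHOLD = 0.7       # minimum confidence before showing a label (assumed)

class GestureDisplay:
    """Buffer per-frame keypoint vectors and emit a gesture name only
    when the model is confident, smoothing out flickering predictions."""

    def __init__(self, predict_fn, labels):
        self.predict_fn = predict_fn  # maps a frame window -> class probabilities
        self.labels = labels
        self.window = deque(maxlen=SEQUENCE_LENGTH)
        self.current = ""             # label currently shown on screen

    def feed(self, keypoints):
        """Add one frame's keypoints; return the label to display."""
        self.window.append(keypoints)
        if len(self.window) == SEQUENCE_LENGTH:
            probs = self.predict_fn(list(self.window))
            best = max(range(len(probs)), key=probs.__getitem__)
            if probs[best] >= THRESHOLD:
                self.current = self.labels[best]
        return self.current
```

In a full system, `feed` would be called once per webcam frame (e.g. inside an OpenCV capture loop) with the flattened Holistic keypoints, and the returned label drawn onto the frame.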

Key Words: Holistic Mediapipe, Sign Language Recognition (SLR), LSTM, Computer Vision, Deep Learning, RNN.

1. INTRODUCTION

Deaf people communicate via hand signs, which hearing people often have a hard time understanding. As a result, systems that recognize various signs and deliver their meaning to ordinary people are required. Our aim is to create a virtual tracking system for people in need, accomplished through image processing and human action recognition; it is intended primarily for persons who are unable to communicate with others verbally. Models consisting of several CNN layers followed by numerous LSTM layers are often used to predict sign language in real time, but the precision of these state-of-the-art models is quite low. Our approach, MediaPipe Holistic with an LSTM model, provides significantly higher accuracy and yields better outcomes with a smaller amount of data. Because it uses fewer parameters, the model trains much faster, resulting in a shorter computation time.
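The claim that the keypoint-based LSTM uses fewer parameters can be illustrated with the standard Keras-style LSTM parameter count (4 gates, each with a weight matrix over the concatenated input and hidden state, plus a bias). The layer sizes below are assumed for illustration only, not the authors' actual architecture: a 1662-value Holistic keypoint vector per frame versus even a small flattened 64x64 grayscale image fed into the same 64-unit LSTM.

```python
def lstm_params(input_dim, units):
    """Trainable parameters in one LSTM layer:
    4 gates x (weights over [input; hidden] + bias)."""
    return 4 * (units * (input_dim + units) + units)

# Keypoint pipeline (assumed): 1662 Holistic features per frame.
keypoint_lstm = lstm_params(1662, 64)   # 442,112 parameters

# Image pipeline (assumed): a 64x64 grayscale frame flattened to 4096
# features, before counting any CNN front-end parameters at all.
image_lstm = lstm_params(4096, 64)      # 1,065,216 parameters
```

The gap widens further in practice, since an image pipeline also needs convolutional layers in front of the LSTM, while the keypoint pipeline offloads that work to MediaPipe.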

1.1 Motivation and Background

Sign Language Recognition (SLR) strives to develop algorithms and methods that accurately identify sequences of signs and understand their meaning. Many SLR methods treat the problem as Gesture Recognition (GR), so research has so far focused on identifying discriminative features and classification methods that correctly label a given sign from a set of possible signs. Sign language, however, is more than just a collection of well-articulated gestures.

© 2022, IRJET | Impact Factor value: 7.529 | ISO 9001:2008 Certified Journal | Page 862

