Real-Time Multilingual Sign Language Translator Using Python

International Research Journal of Engineering and Technology (IRJET) | e-ISSN: 2395-0056 | p-ISSN: 2395-0072
Volume: 11 Issue: 11 | Nov 2024 | www.irjet.net

Real-Time Multilingual Sign Language Translator Using Python

Manemoni Arun Kumar1, Penta Krishna Charan2, RM Noorullah3

1B.Tech Student, Dept. of CSE, Institute of Aeronautical Engineering, Hyderabad, India
2B.Tech Student, Dept. of CSE, Institute of Aeronautical Engineering, Hyderabad, India
3Associate Professor, Dept. of CSE, Institute of Aeronautical Engineering, Hyderabad, India

---------------------------------------------------------------------***---------------------------------------------------------------------

Abstract - Communication barriers between the deaf and hard-of-hearing community and the hearing world present considerable challenges. This paper therefore proposes a real-time, multilingual sign language translation system, developed in Python, to bridge this gap. The system addresses the need for inclusive communication by capturing sign language gestures through a webcam. These gestures are processed and translated into text using machine learning models trained on extensive sign language datasets, and the text is then translated into several languages, making the system versatile across many environments. Our methodology employs up-to-date computer vision techniques for reliable detection and correct parsing of hand gestures, even under changing lighting conditions, which is key to real-time performance. Experimental validation measures gesture recognition accuracy, translation speed, and multilingual functionality. Promising results demonstrate the system's efficacy in fast translation from sign language to spoken languages, enabling understanding among people who do not share a common language and increasing accessibility. This research contributes to human-computer interaction by providing a practical inclusivity tool: the real-time nature of the translation enables smooth communication, while the multilingual feature addresses varied user needs. Built with Python, machine learning, and computer vision, such a system could foster a more connected and understanding society, in which sign language users participate actively in day-to-day communication regardless of any spoken language.

Key Words: Sign Language, Multilingual Translation, Machine Learning, Real-Time Processing, Gesture Recognition, Python, OpenCV-Python, CNN Model, MediaPipe Hands, Translator

1. INTRODUCTION

Sign language is the medium by which deaf and hard-of-hearing people communicate. It conveys messages through hand gestures, facial expressions, and body movements, making it a highly expressive medium. Despite its importance to this community, understanding of sign language among the larger population remains minimal, making it difficult for people who use sign language to communicate with those who do not understand it. Improvements in machine learning and artificial intelligence have opened the way to closing these gaps through systems capable of interpreting sign language in real time. This project aims to improve accessibility and inclusivity for sign language users. Most recent state-of-the-art solutions for sign language translation are limited in the number of languages supported, real-time processing, and accuracy of results. Notably, most of these solutions are tailored to a single language, which prohibits their use in multilingual societies. This paper presents a real-time multilingual sign language translator to tackle these challenges. The proposed system recognizes sign language gestures and translates them into different languages, and its multilingual approach makes it useful to a larger number of users, particularly in multilingual regions. The system is developed using a combination of data collection techniques, machine learning models for gesture recognition, and translation algorithms. Data collection involves capturing hand gesture images and preprocessing them to enhance recognition accuracy. In the gesture recognition module, a CNN trained on these preprocessed images classifies the various sign language gestures. Once a gesture is recognized, the translation module converts it into text and then translates this text into the target languages. The system also provides built-in text-to-speech functionality for audio output, further increasing accessibility. The contributions of the paper can be summarized as follows:
- Real-Time Gesture Recognition: The system accurately recognizes sign language gestures in real time using machine learning techniques.
- Multilingual Translation Support: The system supports translation into multiple languages, facilitating multilingual communication, and is integrated with text-to-speech capabilities, allowing users to listen to the translated text.
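A minimal sketch of the hand-gesture preprocessing this paper describes: assuming MediaPipe Hands supplies 21 (x, y) landmarks per detected hand (as the Key Words suggest), a normalization step can make the feature vector invariant to hand position and scale before it reaches the CNN. The function name and exact scheme below are our illustration, not the paper's published code.

```python
import numpy as np

def normalize_landmarks(landmarks):
    """Normalize 21 MediaPipe-style hand landmarks (x, y) for gesture classification.

    Translates the points so the wrist (landmark 0) sits at the origin, then
    scales by the largest absolute coordinate, making the features position-
    and scale-invariant. A hypothetical preprocessing sketch, not the paper's code.
    """
    pts = np.asarray(landmarks, dtype=np.float32)  # shape (21, 2)
    pts = pts - pts[0]                             # wrist at the origin
    scale = np.abs(pts).max()
    if scale > 0:
        pts = pts / scale                          # coordinates in [-1, 1]
    return pts.flatten()                           # 42-dim feature vector

# Example: a fake hand at an arbitrary position and scale
fake_hand = [(100 + 5 * i, 200 + 3 * i) for i in range(21)]
features = normalize_landmarks(fake_hand)
print(features.shape)  # (42,)
```

The flattened 42-dimensional vector (or the preprocessed image itself, as the paper trains its CNN on images) would then be fed to the classifier.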

2. PROPOSED SOLUTION

The real-time multilingual sign language translator system aims to bridge the communication gap for the hearing-impaired community by recognizing sign language gestures and translating them into multiple languages. The system leverages computer vision techniques and machine learning models to achieve high accuracy in gesture recognition and

© 2024, IRJET | Impact Factor value: 8.315 | ISO 9001:2008 Certified Journal | Page 396
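The recognize-translate-speak flow that this paper describes can be sketched as a small pipeline. The gesture labels, translation table, and speak stub below are hypothetical stand-ins: a deployed system would call a real translation service and a text-to-speech engine (for instance the googletrans and gTTS packages, both assumptions on our part, since the paper does not name its libraries).

```python
# Hypothetical sketch of the recognize -> translate -> speak pipeline.
# The label set and translation table are illustrative, not the paper's data.

GESTURE_TO_TEXT = {0: "hello", 1: "thank you", 2: "yes", 3: "no"}

# Tiny offline stand-in for a real translation API such as googletrans.
TRANSLATIONS = {
    ("hello", "hi"): "नमस्ते",       # Hindi
    ("hello", "te"): "హలో",          # Telugu
    ("thank you", "hi"): "धन्यवाद",
}

def translate(text, target_lang):
    """Return the translated text, falling back to the original if unknown."""
    return TRANSLATIONS.get((text, target_lang), text)

def speak(text):
    """Stub for text-to-speech; a real system might use gTTS or pyttsx3."""
    print(f"[TTS] {text}")

def handle_gesture(class_id, target_lang="hi"):
    """Full pipeline: CNN class id -> English text -> target language -> audio."""
    text = GESTURE_TO_TEXT.get(class_id)
    if text is None:
        return None          # unrecognized gesture class
    translated = translate(text, target_lang)
    speak(translated)
    return translated

result = handle_gesture(0, "hi")
print(result)  # नमस्ते
```

In the real system, `class_id` would come from the CNN's prediction on a webcam frame rather than being passed in directly.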
