Sign Detection from Hearing Impaired

Manu E S1, Hariprasad E2, Mohammed Fahd N3, Premalatha K4

1,2,3 Department of Electrical and Electronics Engineering, Kumaraguru College of Technology (Autonomous), Coimbatore, India
4 Associate Professor, Department of Electrical and Electronics Engineering, Kumaraguru College of Technology (Autonomous), Coimbatore, India
Abstract - Sign language is the mode of communication among the speech and hearing-impaired community. It is hard for people who are not familiar with sign language to communicate without an interpreter. This paper focuses on transcribing gestures from sign language into a spoken language that listeners can easily understand. The translated motions include alphabets and words from static images. This matters most for people who rely entirely on gestural sign language when they try to communicate with someone who does not understand it. Most systems currently developed recognize gestures based on skin color. This paper focuses on creating a filter that can detect the gestural features of the hand irrespective of color. The detection is made possible by computer vision and AI technologies such as machine learning (ML) and deep learning (DL). Currently employed systems use Support Vector Machine algorithms (ML), which give less accurate results; this paper instead uses a CNN (DL) to train the AI model.
Key Words: Support Vector Machine (SVM), Convolutional Neural Network (CNN), Deep Learning (DL), Computer Vision, Machine Learning (ML)
1. INTRODUCTION

People with hearing loss, whether from birth or due to an injury, may not be able to speak. They converse with others via sign language. However, sign language is rarely understood outside their close circle, which makes it unsuitable for everyday communication with the general public. This paper focuses on providing an aid that helps hearing-impaired people communicate with the general public with ease.
[1] Using computer vision, gestures can be detected with 93.17% accuracy. Gestures can also be translated using a capacitive touch sensor with 92% accuracy, though with some disadvantages. [2] A multiclass SVM, trained for the classification of multiple gestures, can detect a gesture in as little as 0.017 s. [3] Since common people cannot always have an interpreter at hand, capturing the gestures and converting them into text messages offers an easier means of communication. [4] The hardware could be a wristband that takes video of the hands while gesturing; analyzing the finger angles in each gesture requires edge detection and thresholding of the image for appropriate results. [5] With the help of a camera, the signs are recognized and classified into their respective classes. Because hand color differs from person to person, images with dark backgrounds are used. [6] Using a CNN instead of an SVM can improve prediction accuracy by 6-7%. In the present work, recognition is done by a CNN model, assisted by a microprocessor that maps the result to a specific user message or action.
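To make the thresholding and edge-detection step concrete, here is a minimal sketch using OpenCV in Python. It is our own illustration rather than code from the cited works, and the file name and Canny thresholds are assumptions:

```python
import cv2

# Load a sample gesture image in grayscale; the file name is a placeholder.
img = cv2.imread("gesture_sample.jpg", cv2.IMREAD_GRAYSCALE)

# Otsu's method picks the binarization threshold automatically, so the
# result depends less on individual skin color or lighting.
_, binary = cv2.threshold(img, 0, 255, cv2.THRESH_BINARY + cv2.THRESH_OTSU)

# Canny edge detection outlines the fingers; the 100/200 hysteresis
# thresholds are illustrative defaults, not values from the cited papers.
edges = cv2.Canny(binary, 100, 200)

cv2.imwrite("gesture_edges.jpg", edges)
```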
2. PROPOSED WORK

In this paper, we propose a model built using Convolutional Neural Networks that detects the signs given by hearing-impaired people, using OpenCV, a computer vision library that helps with image processing and enables real-time recognition. We begin by preparing the dataset: pictures of the hand gestures are taken and preprocessed. With 26 classes of signs and 250 images per class, the dataset contains 6500 images in total. Many more images were generated through data augmentation such as rotation, flipping, shearing, and cropping.
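As a rough sketch of this augmentation step, the following code uses Keras's ImageDataGenerator; the folder name sign_dataset/ (one subfolder per class), the image size, and all parameter values are assumptions, since the paper does not specify them:

```python
from tensorflow.keras.preprocessing.image import ImageDataGenerator

# Transforms named in the text: rotation, flipping, shearing; a zoom
# range stands in for cropping. All values here are illustrative.
datagen = ImageDataGenerator(
    rescale=1.0 / 255,
    rotation_range=15,
    horizontal_flip=True,
    shear_range=0.2,
    zoom_range=0.2,
    validation_split=0.2,
)

# Stream augmented batches from the 26 class subfolders on the fly.
train_gen = datagen.flow_from_directory(
    "sign_dataset/",
    target_size=(64, 64),
    color_mode="grayscale",
    class_mode="categorical",
    subset="training",
)
```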
The CNN model was built using Keras, a deep-learning API that runs on top of TensorFlow and is built specifically for AI practice. Fig. 1 displays a high-level block diagram of the overall process. NumPy is used for the mathematical operations during training. The trained model is saved as a pickle file and tested against the test dataset, from which the final accuracy is obtained.
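The paper does not list the exact layer configuration, so the following Keras model is only a plausible sketch of a small CNN for the 26 sign classes; every layer size here is an assumption. It reuses train_gen from the augmentation sketch above:

```python
import pickle
from tensorflow.keras import layers, models

model = models.Sequential([
    layers.Input(shape=(64, 64, 1)),          # grayscale input images
    layers.Conv2D(32, (3, 3), activation="relu"),
    layers.MaxPooling2D((2, 2)),
    layers.Conv2D(64, (3, 3), activation="relu"),
    layers.MaxPooling2D((2, 2)),
    layers.Flatten(),
    layers.Dense(128, activation="relu"),
    layers.Dense(26, activation="softmax"),   # one output per sign class
])

model.compile(optimizer="adam",
              loss="categorical_crossentropy",
              metrics=["accuracy"])
model.fit(train_gen, epochs=10)               # epoch count is illustrative

# The paper saves the trained model as a pickle file (model.save() would
# be the more common Keras route).
with open("sign_model.pkl", "wb") as f:
    pickle.dump(model, f)
```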
The saved model is deployed on a microprocessor that performs real-time analysis of images. When the AI detects a specific class of sign, it sends a digital input to a microcontroller such as an Arduino, and a text-to-speech converter such as eSpeak produces speech output for the detected class on a speaker. The Arduino then carries out actions such as calling a trusted person in case of emergency or sending the location via SMS using GSM and GPS modules. Each detected sign also maps to a set of words or sentences shown on the gadget's display.
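A hedged sketch of this real-time loop, assuming the pickled model from above, a webcam at index 0, an Arduino on /dev/ttyUSB0 reached through the pyserial package, and the espeak command-line tool; the port name and the class-to-phrase mapping are placeholders:

```python
import pickle
import subprocess
import cv2
import numpy as np
import serial

with open("sign_model.pkl", "rb") as f:
    model = pickle.load(f)

arduino = serial.Serial("/dev/ttyUSB0", 9600)   # port is an assumption
phrases = {0: "Hello", 1: "I need help"}        # placeholder mapping

cap = cv2.VideoCapture(0)
while True:
    ok, frame = cap.read()
    if not ok:
        break
    # Preprocess each frame the same way as the training images.
    gray = cv2.cvtColor(frame, cv2.COLOR_BGR2GRAY)
    small = cv2.resize(gray, (64, 64)).astype("float32") / 255.0
    probs = model.predict(small.reshape(1, 64, 64, 1), verbose=0)
    label = int(np.argmax(probs))

    arduino.write(bytes([label]))               # digital input to the Arduino
    text = phrases.get(label, "")
    if text:
        subprocess.run(["espeak", text])        # speech output on the speaker

    cv2.imshow("sign detection", frame)
    if cv2.waitKey(1) & 0xFF == ord("q"):
        break

cap.release()
cv2.destroyAllWindows()
```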