International Research Journal of Engineering and Technology (IRJET) | e-ISSN: 2395-0056 | p-ISSN: 2395-0072 | Volume: 11 Issue: 05 | May 2024 | www.irjet.net
Sign Language Recognition System

Chaithrashree. A 1, Mritunjay Shukla 2, Rohit Kumar Thakur 3, Shagun Dhurandhar 4, Sudipto Chakraborty 5

1 Assistant Professor, Department of CSE, Brindavan College of Engineering, Bengaluru, India
2-5 B.E. Student, Department of CSE, Brindavan College of Engineering, Bengaluru, India
---------------------------------------------------------------------***---------------------------------------------------------------------
Abstract - Sign language is used by the deaf community to communicate with non-deaf individuals. The gestures used in sign language can be difficult for the general population to understand; however, they can be converted into a form that is easy for the general public to follow. This research covers various image- and video-capturing techniques, preprocessing, classification of hand gestures, landmark extraction, and classification techniques. To identify the most promising methods for future research, this paper examines the techniques used to generate datasets, sign language recognition systems, and classification algorithms. Owing to the growth of classification methods, many currently available studies contribute classification approaches combined with deep learning. This paper focuses on the methods and techniques applied in earlier years.

Key Words: SLRS - Sign Language Recognition System, CNN - Convolutional Neural Network, LSTM - Long Short-Term Memory, SL - Sign Language, NN - Neural Network, LCbCr - Luminance, Chrominance Blue, Chrominance Red color.

1. INTRODUCTION

The goal of a sign language recognition system is to identify and translate hand gestures, body language, and other non-verbal cues used in sign language. This paper discusses a wide range of algorithms and techniques that can be used to collect, process, and comprehend the hand gestures and sign language used by the deaf. Hand gesture recognition is claimed to be a method, or a strategy, for productive human-computer interaction. Sign languages are those that express meaning through the visual-manual modality [1]. Static sign identification from photos or videos recorded in somewhat controlled environments has been the focus of much recent research.

The foundation of computer vision techniques is the way humans interpret information about their surroundings. Since sign language movements vary, it is challenging to create a vision-based interface understood by people worldwide; however, developing such an interface for a specific group of people or nations is conceivable. Detecting hand gestures and their motion depends on region selection, since hand motions have distinct shape variations, textures, and movements. A static hand, skin tone, and hand shape can be identified using characteristics such as finger directions, fingertips, and hand posture, but the picture's background and lighting make these features not always reliable or accessible. Because it is hard to express such features exactly, the entire video frame or image is used as the input. The goal of this work is to examine and evaluate the methodologies employed in previous studies and approaches, and to determine and recommend the optimal direction for future research.

According to Prof. T. Hikmet Karakoc, SL is comparable to spoken language in that both are widely used around the world [2]. As sign language has evolved, its vocabulary and syntax have come to be recognized as those of a real language. Sign language (SL) is the language of choice for the deaf because it can be constructed by combining facial expressions, hand gestures and movements, and palm movements to convey a speaker's thoughts without the need for a voice.

Deep learning can be very effective in sign language recognition and translation tasks. Sign recognition is the analysis and interpretation of sign language gestures, whereas sign translation is the process of converting signed language into spoken language or text. Deep learning methodologies that can be used to train models to understand and translate sign language include convolutional neural networks (CNNs), recurrent neural networks (RNNs), and long short-term memory (LSTM) networks. One popular method is to record the gestures with a camera and then apply computer vision techniques to the video frames, extracting characteristics that can be fed into a deep learning model for classification or translation.

Because deep learning can automatically discover intricate patterns and relationships in large datasets, it is a good fit for sign language recognition applications. Sign language is a visual language built from intricate hand, facial, and body motions. By examining a vast quantity of sign language data, including videos of people using the language, deep learning models can be trained to identify these gestures and patterns, automatically learning the key elements needed for precise sign language identification.
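The capture-preprocess-classify pipeline described above can be sketched in a few lines. This is a minimal illustration only, using NumPy in place of a real camera feed and a trained model: the synthetic frame, the 64x64 input size, and the toy linear classifier are assumptions standing in for video capture and a trained CNN/LSTM network.

```python
import numpy as np

def preprocess(frame: np.ndarray, size: int = 64) -> np.ndarray:
    """Convert an RGB video frame to a normalized grayscale feature vector."""
    gray = frame.mean(axis=2)                      # collapse RGB channels
    # Nearest-neighbour downsample to size x size (stand-in for a real resize).
    rows = np.linspace(0, gray.shape[0] - 1, size).astype(int)
    cols = np.linspace(0, gray.shape[1] - 1, size).astype(int)
    small = gray[np.ix_(rows, cols)]
    return (small / 255.0).ravel()                 # flatten and scale to [0, 1]

def classify(features: np.ndarray, weights: np.ndarray, labels: list) -> str:
    """Toy linear classifier standing in for a trained CNN/LSTM model."""
    scores = weights @ features                    # one score per sign class
    return labels[int(np.argmax(scores))]

# Example: a synthetic 480x640 RGB frame and random (untrained) weights.
rng = np.random.default_rng(0)
frame = rng.integers(0, 256, size=(480, 640, 3), dtype=np.uint8)
labels = ["hello", "thanks", "yes"]
weights = rng.normal(size=(len(labels), 64 * 64))
print(classify(preprocess(frame), weights, labels))
```

In a real system, `preprocess` would be replaced by per-frame resizing and normalization (or landmark extraction), and `classify` by a trained network that consumes a sequence of frames to capture motion over time.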
© 2024, IRJET | Impact Factor value: 8.226 | ISO 9001:2008 Certified Journal | Page 1489