Sign languages convey messages through visual-manual gestures rather than speech. Each letter is typically represented by a predefined sign, and sequences of signs are combined to express words and sentences. The main aim of this paper is to develop a mobile application that captures sign language gestures, feeds them to a trained deep learning model built on 2D Convolutional Neural Networks, and converts them into text and voice output in real time, enabling clearer communication.
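To make the pipeline concrete, the following is a minimal NumPy-only sketch of the kind of forward pass a 2D CNN letter classifier performs: convolve a gesture frame with learned filters, apply ReLU, pool, and produce class probabilities. All shapes, filter values, and the 26-letter class count are illustrative assumptions, not the authors' actual architecture, which would use many stacked layers trained on labeled gesture images.

```python
import numpy as np

def conv2d(image, kernel):
    """Valid 2D cross-correlation of a single-channel image with one kernel."""
    ih, iw = image.shape
    kh, kw = kernel.shape
    out = np.empty((ih - kh + 1, iw - kw + 1))
    for i in range(out.shape[0]):
        for j in range(out.shape[1]):
            out[i, j] = np.sum(image[i:i + kh, j:j + kw] * kernel)
    return out

def softmax(x):
    e = np.exp(x - x.max())
    return e / e.sum()

def classify_frame(frame, kernels, weights):
    """Tiny forward pass: conv -> ReLU -> global average pool -> dense -> softmax."""
    features = np.array([np.maximum(conv2d(frame, k), 0).mean() for k in kernels])
    return softmax(weights @ features)

rng = np.random.default_rng(0)
frame = rng.random((28, 28))               # stand-in for a captured gesture frame
kernels = rng.standard_normal((4, 3, 3))   # 4 "learned" 3x3 filters (random here)
weights = rng.standard_normal((26, 4))     # dense layer over 26 letter classes

probs = classify_frame(frame, kernels, weights)
print(probs.shape)  # one probability per letter class
```

In the real system, the predicted letter would then be appended to the running text output and passed to a text-to-speech engine for the voice output described above.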