Sign languages are languages that use the manual modality to convey meaning. Gestures are a form of nonverbal communication in which visible body actions are used to communicate. This project describes a system interface that allows deaf and mute users to produce spoken output with the help of sign language. Most virtual assistants work only on the basis of audio input and give only audio output; in this project we build a speech engine and, on successful identification of a hand gesture, trigger that engine. The system makes use of deep learning, specifically a convolutional neural network (CNN) built with the TensorFlow library. The recognition result is produced in the form of text, and that text is then spoken aloud.
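A minimal sketch of this pipeline is shown below, assuming TensorFlow/Keras for the CNN and the pyttsx3 library for the speech engine; the gesture labels, the 64x64 grayscale input size, and the confidence threshold are hypothetical choices, not details given in the text.

```python
# Sketch of the described pipeline: a CNN classifies a hand-gesture image,
# and on a confident (successful) identification the speech engine is
# triggered to speak the recognized text.
import numpy as np
import tensorflow as tf
import pyttsx3

GESTURE_LABELS = ["hello", "thank you", "yes", "no"]  # hypothetical label set

def build_model(num_classes: int) -> tf.keras.Model:
    """Small CNN for static hand-gesture classification (assumed 64x64 grayscale input)."""
    model = tf.keras.Sequential([
        tf.keras.layers.Input(shape=(64, 64, 1)),
        tf.keras.layers.Conv2D(32, 3, activation="relu"),
        tf.keras.layers.MaxPooling2D(),
        tf.keras.layers.Conv2D(64, 3, activation="relu"),
        tf.keras.layers.MaxPooling2D(),
        tf.keras.layers.Flatten(),
        tf.keras.layers.Dense(128, activation="relu"),
        tf.keras.layers.Dense(num_classes, activation="softmax"),
    ])
    model.compile(optimizer="adam",
                  loss="sparse_categorical_crossentropy",
                  metrics=["accuracy"])
    return model

def speak(text: str) -> None:
    """Trigger the speech engine: convert the recognized text to audio."""
    engine = pyttsx3.init()
    engine.say(text)
    engine.runAndWait()

def recognize_and_speak(model: tf.keras.Model, frame: np.ndarray,
                        threshold: float = 0.8) -> None:
    """Classify one preprocessed frame; speak only on a confident identification."""
    probs = model.predict(frame[np.newaxis, ..., np.newaxis], verbose=0)[0]
    best = int(np.argmax(probs))
    if probs[best] >= threshold:  # successful identification of the gesture
        speak(GESTURE_LABELS[best])
```

In use, the model would first be trained on labeled gesture images, and `recognize_and_speak` would then be called on each preprocessed camera frame so that recognized gestures are voiced as they occur.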