Verbal communication has long been the primary means by which people interact with one another, but this is not the case for people with hearing and speech impairments. The communication barrier between the impaired and the rest of society remains a significant setback. For deaf and mute people, sign language is the only means of communication. To help them communicate effectively with hearing people, we have devised a practical solution. Our aim is to design a system that analyses and recognizes the alphabets of a sign language from a database of sign images. To accomplish this, the application uses image-processing techniques such as segmentation and feature extraction, and applies a Convolutional Neural Network (CNN), a machine-learning technique, to recognize the signs.
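As an illustration of the recognition stage, the sketch below outlines a small convolutional network of the kind such a system might use. The input resolution (64x64 grayscale), the number of classes (26 alphabet letters), and the Keras layer choices are assumptions made for the example, not details taken from the described application.

```python
# A minimal sketch of a CNN classifier for sign alphabet images.
# Assumptions (not specified in the text): 64x64 grayscale inputs,
# 26 output classes (one per alphabet letter), Keras API.
from tensorflow.keras import layers, models

NUM_CLASSES = 26      # assumed: one class per alphabet letter
IMG_SIZE = (64, 64)   # assumed input resolution

model = models.Sequential([
    layers.Input(shape=(*IMG_SIZE, 1)),
    # Convolution + pooling blocks extract local features of the hand shape
    layers.Conv2D(32, (3, 3), activation="relu"),
    layers.MaxPooling2D((2, 2)),
    layers.Conv2D(64, (3, 3), activation="relu"),
    layers.MaxPooling2D((2, 2)),
    # Flatten the feature maps and classify into one of the alphabet classes
    layers.Flatten(),
    layers.Dense(128, activation="relu"),
    layers.Dropout(0.5),
    layers.Dense(NUM_CLASSES, activation="softmax"),
])

model.compile(optimizer="adam",
              loss="categorical_crossentropy",
              metrics=["accuracy"])

# Example training call on preprocessed (segmented, normalized) sign images:
# model.fit(x_train, y_train, epochs=10, validation_split=0.1)
model.summary()
```

In such a pipeline, segmentation and feature extraction would be applied to the raw sign images before they are fed to the network, so the classifier only sees the cleaned hand region.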