Please use this identifier to cite or link to this item:
http://10.1.7.192:80/jspui/handle/123456789/11332
Title: | American Sign Language (ASL) detection system using Convolutional Neural Networks (CNN) and Computer Vision |
Authors: | Shah, Siddhiben |
Keywords: | Computer 2020; Project Report 2020; Computer Project Report; Project Report; 20MCE; 20MCEC; 20MCEC15 |
Issue Date: | 1-Jun-2022 |
Publisher: | Institute of Technology |
Series/Report no.: | 20MCEC15; |
Abstract: | As communication is a very important part of our lives, it should be clear and understandable. According to the World Health Organization (WHO), there are 432 million adults and 34 million children in the world who have disabling hearing loss. So, using a deep learning model called a Convolutional Neural Network (CNN), computer vision technologies, and the TensorFlow and Keras frameworks, it is possible to detect and interpret hand gestures of American Sign Language (ASL). In this project, approximately 24K images of different gestures used in conversation are employed. A Gaussian filter is used to pre-process the data and, as discussed, we have used a Convolutional Neural Network trained on images of hand gestures of different people according to ASL. To interpret the gestures and incorporate a voice module, the Google Text-to-Speech API is used. So, for interpreting ASL for deaf people, different libraries and deep learning techniques are very useful, making communication with hearing-impaired people easier. In this paper we have achieved 97.7% accuracy using the Convolutional Neural Network (CNN) model. (A minimal sketch of this pipeline is given after the record below.) |
URI: | http://10.1.7.192:80/jspui/handle/123456789/11332 |
Appears in Collections: | Dissertation, CE |
Files in This Item:
File | Description | Size | Format | |
---|---|---|---|---|
20MCEC15.pdf | 20MCEC15 | 1.45 MB | Adobe PDF | View/Open
Items in DSpace are protected by copyright, with all rights reserved, unless otherwise indicated.
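The abstract outlines a three-stage pipeline: Gaussian-filter pre-processing, a CNN classifier built with TensorFlow/Keras, and voice output via Google Text-to-Speech. The thesis PDF is not reproduced here, so the following is only a minimal sketch of that pipeline; the image size, layer sizes, class count, and the use of OpenCV and the gTTS package are assumptions for illustration, not the author's actual implementation.

```python
# Minimal sketch (assumed details, not the thesis code): Gaussian-filter
# pre-processing and a small Keras CNN for ASL hand-gesture classification.
import cv2
import numpy as np
import tensorflow as tf

IMG_SIZE = 64          # assumed input resolution
NUM_CLASSES = 24       # static ASL alphabet gestures (J and Z require motion)

def preprocess(path):
    """Read an image, denoise it with a Gaussian filter, and scale to [0, 1]."""
    img = cv2.imread(path, cv2.IMREAD_GRAYSCALE)
    img = cv2.GaussianBlur(img, (5, 5), 0)        # Gaussian pre-processing step
    img = cv2.resize(img, (IMG_SIZE, IMG_SIZE))
    img = img.astype(np.float32) / 255.0
    return img[..., np.newaxis]                   # add channel axis for the CNN

def build_model():
    """Small CNN of the kind the abstract describes (TensorFlow/Keras)."""
    return tf.keras.Sequential([
        tf.keras.layers.Input(shape=(IMG_SIZE, IMG_SIZE, 1)),
        tf.keras.layers.Conv2D(32, 3, activation="relu"),
        tf.keras.layers.MaxPooling2D(),
        tf.keras.layers.Conv2D(64, 3, activation="relu"),
        tf.keras.layers.MaxPooling2D(),
        tf.keras.layers.Flatten(),
        tf.keras.layers.Dense(128, activation="relu"),
        tf.keras.layers.Dropout(0.5),
        tf.keras.layers.Dense(NUM_CLASSES, activation="softmax"),
    ])

model = build_model()
model.compile(optimizer="adam",
              loss="sparse_categorical_crossentropy",
              metrics=["accuracy"])
# model.fit(train_images, train_labels, epochs=10, validation_split=0.1)

# Voicing a predicted letter: the abstract mentions the Google Text-to-Speech
# API; the gTTS package is used below only as an assumed stand-in.
# from gtts import gTTS
# gTTS(text="A", lang="en").save("prediction.mp3")
```

With the roughly 24K labelled gesture images mentioned in the abstract split into training and validation sets, `model.fit` would train the classifier, and a predicted letter could then be passed to the text-to-speech step hinted at in the commented lines.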