Web application to learn sign language with deep learning


Bibliographic Details
Main Author: Jami Jami, Bryan Eduardo (author)
Format: bachelorThesis
Language: English
Published: 2023
Subjects:
Online Access: http://repositorio.yachaytech.edu.ec/handle/123456789/676
Description
Summary: Deep learning and computer vision are used to create applications that facilitate better interaction between humans and machines. In the educational domain, obtaining information about sign language is simple, but finding a platform that allows for intuitive interaction is quite challenging. A web app has been developed to address this issue by employing deep learning to assist users in learning sign language. In this study, two models for hand-gesture recognition, AlexNet and GoogLeNet, were tested using 20,800 images. The overfitting problem encountered in convolutional neural networks was considered while training these models, and several techniques to minimize overfitting and improve overall accuracy were employed. AlexNet achieved an 87% accuracy rate when interpreting hand gestures, whereas GoogLeNet achieved an 85% accuracy rate. These results were incorporated into the web app, which aims to teach the American Sign Language alphabet intuitively.