Web application to learn sign language with deep learning

Deep learning and computer vision are used to create applications that facilitate better interaction between humans and machines. In the educational domain, obtaining information about sign language is simple, but finding a platform that allows for intuitive interaction is quite challenging. A web app has been developed to address this issue by employing deep learning to assist users in learning sign language.

Description

Saved in:
Bibliographic details
Main author: Jami Jami, Bryan Eduardo (author)
Format: bachelorThesis
Language: eng
Published: 2023
Subjects:
Online access: http://repositorio.yachaytech.edu.ec/handle/123456789/676
_version_ 1863534787354951680
author Jami Jami, Bryan Eduardo
author_facet Jami Jami, Bryan Eduardo
author_role author
collection Repositorio Universidad Yachay Tech
dc.contributor.none.fl_str_mv Morocho Cayamcela, Manuel Eugenio
dc.creator.none.fl_str_mv Jami Jami, Bryan Eduardo
dc.date.none.fl_str_mv 2023-11-17T16:31:17Z
2023-11-17T16:31:17Z
2023-11
dc.format.none.fl_str_mv application/pdf
dc.identifier.none.fl_str_mv http://repositorio.yachaytech.edu.ec/handle/123456789/676
dc.language.none.fl_str_mv eng
dc.publisher.none.fl_str_mv Universidad de Investigación de Tecnología Experimental Yachay
dc.rights.none.fl_str_mv info:eu-repo/semantics/openAccess
dc.source.none.fl_str_mv reponame:Repositorio Universidad Yachay Tech
instname:Universidad Yachay Tech
instacron:Yachay
dc.subject.none.fl_str_mv Aprendizaje profundo
Visión computacional
Lenguaje de señas
Deep learning
Computer vision
Sign language
dc.title.none.fl_str_mv Web application to learn sign language with deep learning
dc.type.none.fl_str_mv info:eu-repo/semantics/publishedVersion
info:eu-repo/semantics/bachelorThesis
description Deep learning and computer vision are used to create applications that facilitate better interaction between humans and machines. In the educational domain, obtaining information about sign language is simple, but finding a platform that allows for intuitive interaction is quite challenging. A web app has been developed to address this issue by employing deep learning to assist users in learning sign language. In this study, two models for hand-gesture recognition were tested, utilizing 20,800 images; the models tested were AlexNet and GoogLeNet. The overfitting problem encountered in convolutional neural networks was considered while training these models, and several techniques to minimize overfitting and improve overall accuracy were employed. AlexNet achieved an 87% accuracy rate when interpreting hand gestures, whereas GoogLeNet achieved an 85% accuracy rate. These results were incorporated into the web app, which aims to teach the alphabet of American Sign Language intuitively.
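The abstract mentions techniques for minimizing overfitting in CNNs without naming them; one widely used technique in networks like AlexNet is dropout. The sketch below is illustrative only and not taken from the thesis; all names in it are hypothetical.

```python
import numpy as np

def inverted_dropout(activations, drop_prob, rng):
    """Zero each activation with probability drop_prob during training,
    rescaling the survivors by 1/keep_prob so the expected activation is
    unchanged. At inference time the layer is simply skipped."""
    keep_prob = 1.0 - drop_prob
    mask = rng.random(activations.shape) < keep_prob  # Boolean keep-mask
    return activations * mask / keep_prob

rng = np.random.default_rng(seed=0)
x = np.ones((4, 8))                                  # dummy batch of activations
y = inverted_dropout(x, drop_prob=0.5, rng=rng)      # entries are now 0.0 or 2.0
```

Randomly silencing units this way prevents co-adaptation of features, which is one reason it helps close the gap between training and validation accuracy that the abstract alludes to.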
eu_rights_str_mv openAccess
format bachelorThesis
id Yachay_2a83bc4c4b632e9e54b02b9cdebcde60
instacron_str Yachay
institution Yachay
instname_str Universidad Yachay Tech
language eng
network_acronym_str Yachay
network_name_str Repositorio Universidad Yachay Tech
oai_identifier_str oai:repositorio.yachaytech.edu.ec:123456789/676
publishDate 2023
publisher.none.fl_str_mv Universidad de Investigación de Tecnología Experimental Yachay
reponame_str Repositorio Universidad Yachay Tech
repository.mail.fl_str_mv .
repository.name.fl_str_mv Repositorio Universidad Yachay Tech - Universidad Yachay Tech
repository_id_str 10284
spelling Web application to learn sign language with deep learning. Jami Jami, Bryan Eduardo. Subjects: Aprendizaje profundo; Visión computacional; Lenguaje de señas; Deep learning; Computer vision; Sign language.
Deep learning and computer vision are used to create applications that facilitate better interaction between humans and machines. In the educational domain, obtaining information about sign language is simple, but finding a platform that allows for intuitive interaction is quite challenging. A web app has been developed to address this issue by employing deep learning to assist users in learning sign language. In this study, two models for hand-gesture recognition were tested, utilizing 20,800 images; the models tested were AlexNet and GoogLeNet. The overfitting problem encountered in convolutional neural networks was considered while training these models, and several techniques to minimize overfitting and improve overall accuracy were employed. AlexNet achieved an 87% accuracy rate when interpreting hand gestures, whereas GoogLeNet achieved an 85% accuracy rate. These results were incorporated into the web app, which aims to teach the alphabet of American Sign Language intuitively. (A Spanish translation of the same abstract is also on record.)
Degree: Information Technology Engineer (Ingeniero/a en Tecnologías de la Información). Publisher: Universidad de Investigación de Tecnología Experimental Yachay. Advisor: Morocho Cayamcela, Manuel Eugenio. Dates: 2023-11-17T16:31:17Z (accessioned); 2023-11 (issued). info:eu-repo/semantics/publishedVersion; info:eu-repo/semantics/bachelorThesis; application/pdf. http://repositorio.yachaytech.edu.ec/handle/123456789/676. eng. info:eu-repo/semantics/openAccess. reponame:Repositorio Universidad Yachay Tech; instname:Universidad Yachay Tech; instacron:Yachay. Harvested 2025-07-08T17:49:40Z; oai:repositorio.yachaytech.edu.ec:123456789/676. Institutional repository of a public university: https://repositorio.yachaytech.edu.ec/; https://www.yachaytech.edu.ec/; OAI endpoint: https://repositorio.yachaytech.edu.ec/oai. Ecuador. opendoar:10284.
spellingShingle Web application to learn sign language with deep learning
Jami Jami, Bryan Eduardo
Aprendizaje profundo
Visión computacional
Lenguaje de señas
Deep learning
Computer vision
Sign language
status_str publishedVersion
title Web application to learn sign language with deep learning
title_full Web application to learn sign language with deep learning
title_fullStr Web application to learn sign language with deep learning
title_full_unstemmed Web application to learn sign language with deep learning
title_short Web application to learn sign language with deep learning
title_sort Web application to learn sign language with deep learning
topic Aprendizaje profundo
Visión computacional
Lenguaje de señas
Deep learning
Computer vision
Sign language
url http://repositorio.yachaytech.edu.ec/handle/123456789/676