Sign Language Recognition - ASL Recognition with MediaPipe and Recurrent Neural Networks
Abstract
The recognition of Sign Language has been a challenge for more than twenty years. In the last decade, solutions such as translating gloves or complex multi-camera systems have achieved partial or full recognition.
In contrast to those earlier technologies, this research shows that complex and expensive hardware is no longer needed to recognize Sign Language: a modern mobile phone or computer camera suffices. This is accomplished with Google's MediaPipe framework, released in 2019, combined with recurrent neural networks (RNNs).
This paper demonstrates that four different gestures (hello, no, sign and understand) can be recognized in real time, with an accuracy of 92%, using only a mobile phone or computer camera.
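To make the described pipeline concrete, the following is a minimal sketch of the classification stage: MediaPipe Hands yields 21 landmarks per hand, each with (x, y, z) coordinates, so each video frame becomes a 63-dimensional feature vector, and a recurrent network consumes the frame sequence and outputs a probability per gesture. The weights below are random placeholders rather than the paper's trained model, and all names (`rnn_step`, `classify`, the hidden size) are illustrative assumptions, not the authors' implementation.

```python
import math
import random

# Assumption: 21 MediaPipe hand landmarks x (x, y, z) per frame.
N_FEATURES = 21 * 3
N_HIDDEN = 16                # illustrative hidden-state size
CLASSES = ["hello", "no", "sign", "understand"]  # gestures from the paper

random.seed(0)

def rand_matrix(rows, cols):
    # Placeholder weights; a real model would learn these from
    # labelled sign-language sequences.
    return [[random.uniform(-0.1, 0.1) for _ in range(cols)]
            for _ in range(rows)]

W_xh = rand_matrix(N_HIDDEN, N_FEATURES)   # input -> hidden
W_hh = rand_matrix(N_HIDDEN, N_HIDDEN)     # hidden -> hidden
W_hy = rand_matrix(len(CLASSES), N_HIDDEN) # hidden -> class scores

def rnn_step(x, h):
    """One Elman-RNN step: h' = tanh(W_xh @ x + W_hh @ h)."""
    return [
        math.tanh(
            sum(W_xh[i][j] * x[j] for j in range(N_FEATURES))
            + sum(W_hh[i][k] * h[k] for k in range(N_HIDDEN))
        )
        for i in range(N_HIDDEN)
    ]

def softmax(scores):
    m = max(scores)
    exps = [math.exp(s - m) for s in scores]
    total = sum(exps)
    return [e / total for e in exps]

def classify(sequence):
    """Run the RNN over a sequence of landmark frames and return
    a probability for each gesture class."""
    h = [0.0] * N_HIDDEN
    for frame in sequence:
        h = rnn_step(frame, h)
    scores = [sum(W_hy[c][k] * h[k] for k in range(N_HIDDEN))
              for c in range(len(CLASSES))]
    return dict(zip(CLASSES, softmax(scores)))

# Example: a dummy 30-frame sequence standing in for MediaPipe output.
dummy_seq = [[random.random() for _ in range(N_FEATURES)]
             for _ in range(30)]
probs = classify(dummy_seq)
print(probs)
```

In practice the recurrent cell would be an LSTM or GRU trained on recorded sign sequences; the sketch only shows how per-frame landmark vectors flow through a recurrent state into a four-way gesture classification.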