Show simple item record

dc.contributor.advisor	Hayatunnufus, Hayatunnufus
dc.contributor.advisor	Lydia, Maya Silvi
dc.contributor.author	Purba, Helga Pricilla Br.
dc.date.accessioned	2025-04-16T04:10:31Z
dc.date.available	2025-04-16T04:10:31Z
dc.date.issued	2025
dc.identifier.uri	https://repositori.usu.ac.id/handle/123456789/103139
dc.description.abstract	This research focuses on the integration of the MobileNetV2 deep learning model and Speech Recognition technology in a video call application to detect Indonesian Sign Language (BISINDO) gestures and convert speech to text in real time. The main objective of this study is to facilitate more inclusive two-way communication between Deaf and non-Deaf users in the context of video calls. The ElCue application integrates both technologies: MobileNetV2 detects the hand gestures of Deaf users and translates them into text, while Speech Recognition transcribes the speech of non-Deaf users into text. The MobileNetV2 model, converted to TensorFlow Lite (TFLite) format, detected BISINDO gestures with an average accuracy of 85.56%. Meanwhile, Speech Recognition technology, through integration of a Speech-to-Text API, transcribed speech with an average accuracy of 93%. The use of the Agora SDK for video calls ensures smooth audio and video communication despite the real-time processing demands of the AI components. Testing results show that this integration successfully enhances communication accessibility between Deaf and non-Deaf users, although challenges remain regarding the limited computational power of mobile devices. This study makes a significant contribution to the development of more inclusive communication technologies for the Deaf community.	en_US
dc.language.iso	id	en_US
dc.publisher	Universitas Sumatera Utara	en_US
dc.subject	MobileNetV2 Integration	en_US
dc.subject	Speech Recognition	en_US
dc.subject	Video Call	en_US
dc.subject	Indonesian Sign Language (BISINDO)	en_US
dc.subject	TensorFlow Lite	en_US
dc.subject	Agora SDK	en_US
dc.title	Integrasi Model Deep Learning dengan Pre-trained MobileNetV2 dan Speech Recognition pada Aplikasi Video Call ElCue untuk Pendeteksian Bahasa Isyarat Indonesia (BISINDO)	en_US
dc.title.alternative	Integration of Deep Learning Model with Pre-trained MobileNetV2 and Speech Recognition in ElCue Video Call Application for Detection of Indonesian Sign Language (BISINDO)	en_US
dc.type	Thesis	en_US
dc.identifier.nim	NIM211401067
dc.identifier.nidn	NIDN0019079202
dc.identifier.nidn	NIDN0027017403
dc.identifier.kodeprodi	KODEPRODI55201#Ilmu Komputer
dc.description.pages	76 Pages	en_US
dc.description.type	Skripsi Sarjana	en_US
dc.subject.sdgs	SDGs 10. Reduce Inequalities	en_US

