
    Integrasi Model Deep Learning dengan Pre-trained MobileNetV2 dan Speech Recognition pada Aplikasi Video Call ElCue untuk Pendeteksian Bahasa Isyarat Indonesia (BISINDO)

    Integration of Deep Learning Model with Pre-trained MobileNetV2 and Speech Recognition in ElCue Video Call Application for Detection of Indonesian Sign Language (BISINDO)

    View/Open
    Cover (1.720Mb)
    Fulltext (4.413Mb)
    Date
    2025
    Author
    Purba, Helga Pricilla Br.
    Advisor(s)
    Hayatunnufus, Hayatunnufus
    Lydia, Maya Silvi
    Abstract
    This research focuses on the integration of the MobileNetV2 deep learning model and Speech Recognition technology in a video call application to detect Indonesian Sign Language (BISINDO) gestures and convert speech to text in real time. The main objective of this study is to facilitate more inclusive two-way communication between Deaf and non-Deaf users in the context of video calls. The ElCue application integrates both technologies: MobileNetV2 is used to detect the hand gestures of Deaf users and translate them into text, while Speech Recognition is used to transcribe the speech of non-Deaf users into text. The MobileNetV2 model, converted into TensorFlow Lite (TFLite) format, successfully detected BISINDO gestures with an average accuracy of 85.56%. Meanwhile, Speech Recognition technology, through the integration of the Speech-to-Text API, transcribed speech with an average accuracy of 93%. The use of the Agora SDK for video calls ensures smooth audio and video communication, despite the real-time processing requirements of the AI components. Testing results show that this integration successfully enhances communication accessibility between Deaf and non-Deaf users, although challenges remain regarding the limited computational power of mobile devices. This study makes a significant contribution to the development of more inclusive communication technologies for the Deaf community.
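
    For illustration, a pipeline of the kind the abstract describes, fine-tuning a pre-trained MobileNetV2 on BISINDO gesture images and exporting it to TensorFlow Lite for on-device inference, might look roughly like the sketch below. This is not taken from the thesis itself; the class count, input size, dataset handling, and file names are assumptions made for the example.

    # Minimal sketch, assuming a Keras/TensorFlow workflow: fine-tune a
    # pre-trained MobileNetV2 classifier on BISINDO gesture images and
    # export it to TFLite. NUM_CLASSES, IMG_SIZE, and file names are
    # illustrative assumptions, not details from the thesis.
    import tensorflow as tf

    NUM_CLASSES = 26          # assumed: one class per BISINDO alphabet gesture
    IMG_SIZE = (224, 224)     # MobileNetV2's default input resolution

    # Pre-trained MobileNetV2 backbone (ImageNet weights), classifier head removed.
    base = tf.keras.applications.MobileNetV2(
        input_shape=IMG_SIZE + (3,), include_top=False, weights="imagenet")
    base.trainable = False    # freeze the backbone for transfer learning

    model = tf.keras.Sequential([
        base,
        tf.keras.layers.GlobalAveragePooling2D(),
        tf.keras.layers.Dropout(0.2),
        tf.keras.layers.Dense(NUM_CLASSES, activation="softmax"),
    ])
    model.compile(optimizer="adam",
                  loss="sparse_categorical_crossentropy",
                  metrics=["accuracy"])

    # train_ds / val_ds would be tf.data.Dataset objects built from labelled
    # gesture images, e.g. via tf.keras.utils.image_dataset_from_directory(...).
    # model.fit(train_ds, validation_data=val_ds, epochs=10)

    # Convert the trained Keras model to TensorFlow Lite for on-device use.
    converter = tf.lite.TFLiteConverter.from_keras_model(model)
    converter.optimizations = [tf.lite.Optimize.DEFAULT]
    tflite_model = converter.convert()
    with open("bisindo_mobilenetv2.tflite", "wb") as f:
        f.write(tflite_model)

    The resulting .tflite file could then be bundled into the mobile application and run with the TensorFlow Lite interpreter on camera frames, while speech transcription and video calling are handled by a Speech-to-Text API and the Agora SDK, as described in the abstract.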
    URI
    https://repositori.usu.ac.id/handle/123456789/103139
    Collections
    • Undergraduate Theses [1180]
