NCU Institutional Repository - theses and dissertations, past exams, journal articles, and research projects: Item 987654321/86297


    Please use this identifier to cite or link to this item: http://ir.lib.ncu.edu.tw/handle/987654321/86297


    Title: 以少量視訊建構台灣手語詞分類模型;Using a Small Video Dataset to Construct a Taiwanese-Sign-Language Word Classification Model
    Authors: 翁浚銘;Wong, Jun-Ming
    Contributors: 軟體工程研究所 (Institute of Software Engineering)
    Keywords: Taiwanese sign language;sign language recognition;deep learning
    Date: 2021-08-04
    Issue Date: 2021-12-07 12:29:01 (UTC+8)
    Publisher: 國立中央大學 (National Central University)
    Abstract: Sign languages (SL) are visual languages that use hand shapes,
    movements, and even facial expressions to convey information, serving as
    the primary communication tool for hearing-impaired people. Sign language
    recognition (SLR) based on deep learning technologies has attracted much
    attention in recent years. Nevertheless, training neural networks requires
    a massive number of SL videos, and their preparation is time-consuming and
    cumbersome. This research proposes a method for building effective
    deep-learning training data from a small set of SL videos, enabling the
    recognition of Taiwanese Sign Language (TSL) vocabulary in video frames.
    First, we collect a series of TSL teaching videos from a video-sharing
    platform. Then, Mask R-CNN [1] is employed to extract segmentation masks
    of the hands and face in all video frames, and spatial-domain data
    augmentation is applied to create training sets with varied content.
    Varying temporal-domain sampling strategies are also employed to simulate
    the speeds of different signers. Finally, an attention-based 3D-ResNet
    trained on the synthetic dataset is used to classify a variety of TSL
    vocabulary. The experimental results show promising performance and
    demonstrate the feasibility of this approach to SLR.
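    The two data-generation steps described in the abstract — compositing the
    segmented hands/face onto new backgrounds, and resampling frames to mimic
    faster or slower signers — can be sketched roughly as follows. The function
    names, the clip centering, and the rounding choices here are illustrative
    assumptions, not the thesis's actual implementation:

    ```python
    import numpy as np

    def composite(frame, mask, background):
        """Keep only the masked signer regions (hands/face) of `frame` and
        paste them onto a different `background`, producing a new training
        image that shows the same sign over new content."""
        # Broadcast the boolean H x W mask over the RGB channels.
        return np.where(mask[..., None], frame, background)

    def sample_frames(num_frames, clip_len=16, speed=1.0):
        """Pick `clip_len` frame indices from a video of `num_frames` frames.
        A speed factor > 1 spreads the indices out (skipping frames, as a
        faster signer would appear); a factor < 1 packs them together
        (repeating frames, like a slower signer)."""
        span = min(int(round(clip_len * speed)), num_frames)  # frames covered
        start = max((num_frames - span) // 2, 0)              # center the clip
        idx = np.linspace(start, start + span - 1, clip_len)
        return np.clip(np.round(idx).astype(int), 0, num_frames - 1)
    ```

    Applying `composite` with many backgrounds and `sample_frames` with several
    speed factors to one source video yields many distinct training clips, which
    is the general idea behind building a synthetic dataset from little footage.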
    Appears in Collections:[Software Engineering] Electronic Thesis & Dissertation

    Files in This Item:

    File: index.html | Size: 0Kb | Format: HTML | Views: 124


    All items in NCUIR are protected by copyright, with all rights reserved.
