Please use this permanent URL to cite or link to this item: http://ir.lib.ncu.edu.tw/handle/987654321/84106


Title: Hand Feature Extraction and Gesture Recognition for Taiwan Sign Language by Using Synthetic Datasets (使用虛擬合成資料實現臺灣手語特徵擷取暨手型辨識)
Author: Chen, Yu-Jung (陳宥榕)
Contributors: Department of Computer Science and Information Engineering
Keywords: synthetic datasets; Taiwan Sign Language; feature extraction; gesture recognition
Date: 2020-08-10
Upload time: 2020-09-02 18:05:09 (UTC+8)
Publisher: National Central University
Abstract: This research addresses hand feature extraction and hand-shape (gesture) recognition for Taiwan Sign Language videos. First, we build a training dataset with Unity3D by compositing 3D hand models onto natural scenes, scenes containing people, and solid-color backgrounds, quickly producing a large amount of high-quality training data that includes hand images, hand contours, and hand keypoints. Using synthetic data reduces the burden of manual labeling and the errors it can introduce. We discuss how to make the synthetic images closer to real footage by adjusting background complexity, diversifying skin tones, and adding motion blur, generating varied images that improve model robustness. Next, we compare the completeness of the bounding boxes and semantic segmentation produced by a ResNeSt+Detectron2 model with the heatmaps produced by a modified EfficientDet model, and ultimately adopt bounding boxes as the feature-extraction output for hand-shape recognition, using them to crop hand images from sign language videos. We again use Unity3D to build a training dataset, rendering several basic Taiwan Sign Language hand shapes from 3D hand models, and then use ResNeSt for classification. Experimental results show that the large amount of high-quality synthetic data generated in this research can be effectively applied to hand feature extraction and hand-shape recognition for Taiwan Sign Language.
Hearing-impaired people rely on sign languages to communicate with each other, but may have difficulty interacting with people who do not understand sign languages. Since sign languages are visual languages, computer vision approaches to recognizing them are generally considered a feasible way to bridge this gap. However, sign language recognition is a complex task that requires classifying hand shapes, hand motions, and facial expressions. Detecting and classifying hand gestures should be the first step because hands are the most important elements. This research thus focuses on hand feature extraction and gesture recognition for Taiwan Sign Language (TSL) videos.
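The synthetic-data generation above is implemented inside Unity3D (C# scripting). As a minimal, language-consistent sketch of the underlying idea only, the following Python/OpenCV analogue composites a rendered hand onto a varied background, adds motion blur, and derives a bounding-box label from the render's alpha channel; all file paths, the blur kernel, and the helper name are hypothetical illustrations, not the thesis's actual tooling.

```python
# Hypothetical Python/OpenCV analogue of the Unity3D compositing step.
# It assumes the hand render is an RGBA image whose alpha channel marks the hand.
import random
import numpy as np
import cv2

def composite_hand(hand_rgba_path, background_path, blur_len=9):
    """Paste a rendered hand (RGBA) onto a background, add motion blur,
    and return the image plus an auto-derived bounding-box label."""
    hand = cv2.imread(hand_rgba_path, cv2.IMREAD_UNCHANGED)    # H x W x 4
    bg = cv2.imread(background_path)                           # H x W x 3
    bg = cv2.resize(bg, (hand.shape[1], hand.shape[0]))

    alpha = hand[:, :, 3:4].astype(np.float32) / 255.0
    blended = hand[:, :, :3].astype(np.float32) * alpha + bg.astype(np.float32) * (1.0 - alpha)

    # Horizontal motion-blur kernel mimicking fast hand movement.
    kernel = np.zeros((blur_len, blur_len), dtype=np.float32)
    kernel[blur_len // 2, :] = 1.0 / blur_len
    blended = cv2.filter2D(blended, -1, kernel)

    # The alpha mask doubles as a segmentation label; its extent gives the bounding box.
    ys, xs = np.nonzero(hand[:, :, 3])
    bbox = (int(xs.min()), int(ys.min()), int(xs.max()), int(ys.max()))
    return blended.astype(np.uint8), bbox

image, bbox = composite_hand("renders/hand_0001.png",
                             random.choice(["bg/street.jpg", "bg/office.jpg"]))
cv2.imwrite("synthetic/hand_0001.jpg", image)
print("auto-generated bounding box:", bbox)
```

Deriving labels from the render in this way is what lets the hand images, contours, and keypoint annotations described in the abstract come for free with every generated frame, avoiding manual labeling.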
First, we built a synthetic dataset using Unity3D. The advantage of synthetic data is that it reduces the effort of manual labeling and avoids labeling errors, so a large dataset with high-quality labels can be produced. The dataset is generated by varying hand shapes, skin colors, and orientations. Background images are also varied to increase the robustness of the model, and motion blur is added to make the synthetic data look closer to real footage. We compare three feature-extraction outputs: bounding boxes and semantic segmentation generated by a ResNeSt+Detectron2 model, and heatmaps generated by a modified EfficientDet model. Bounding boxes are selected for the subsequent gesture recognition and are used to crop hand regions from the TSL videos. We also employ Unity3D to create several basic TSL hand shapes, and then use ResNeSt for classification and recognition.
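As a rough sketch of the detect-crop-classify pipeline just described, the snippet below assumes a Detectron2 detector fine-tuned on the synthetic hand data and a ResNeSt hand-shape classifier trained on the rendered basic TSL hand shapes. The stock COCO Mask R-CNN config stands in for the thesis's ResNeSt+Detectron2 detector, timm's `resnest50d` stands in for its classifier, and the weight paths, class count, and file names are hypothetical.

```python
# Sketch of the detect-crop-classify pipeline described in the abstract.
# Configs, weight paths, class count and file names are hypothetical stand-ins.
import cv2
import torch
import timm
from detectron2 import model_zoo
from detectron2.config import get_cfg
from detectron2.engine import DefaultPredictor

# 1) Hand detector (bounding boxes). A stock Mask R-CNN config is used here;
#    the thesis pairs Detectron2 with a ResNeSt backbone instead.
cfg = get_cfg()
cfg.merge_from_file(model_zoo.get_config_file(
    "COCO-InstanceSegmentation/mask_rcnn_R_50_FPN_3x.yaml"))
cfg.MODEL.ROI_HEADS.NUM_CLASSES = 1                 # a single "hand" class
cfg.MODEL.WEIGHTS = "weights/hand_detector.pth"     # hypothetical fine-tuned weights
cfg.MODEL.ROI_HEADS.SCORE_THRESH_TEST = 0.7
detector = DefaultPredictor(cfg)

# 2) Hand-shape classifier: ResNeSt-50 with one output per basic TSL hand shape.
NUM_HAND_SHAPES = 20                                # hypothetical number of classes
classifier = timm.create_model("resnest50d", pretrained=False,
                               num_classes=NUM_HAND_SHAPES)
classifier.load_state_dict(torch.load("weights/handshape_resnest.pth",
                                      map_location="cpu"))
classifier.eval()

frame = cv2.imread("frames/tsl_000123.jpg")         # one frame of a TSL video
boxes = detector(frame)["instances"].pred_boxes.tensor.cpu().numpy()

for x0, y0, x1, y1 in boxes.astype(int):
    crop = frame[y0:y1, x0:x1]                      # bounding box crops the hand region
    crop = cv2.cvtColor(cv2.resize(crop, (224, 224)), cv2.COLOR_BGR2RGB)
    tensor = torch.from_numpy(crop).permute(2, 0, 1).float().unsqueeze(0) / 255.0
    with torch.no_grad():
        shape_id = classifier(tensor).argmax(dim=1).item()
    print(f"hand at ({x0},{y0},{x1},{y1}) -> hand-shape class {shape_id}")
```

A heatmap-based EfficientDet variant would replace step 1, but the abstract reports that bounding boxes were the more complete feature, hence the crop-based pipeline sketched here.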
Experimental results demonstrate that the synthetic dataset can effectively help train suitable models for hand feature extraction and gesture recognition in TSL videos.
Appears in Collections: [Graduate Institute of Computer Science and Information Engineering] Theses and Dissertations


