Please use this permanent URL to cite or link to this item: http://ir.lib.ncu.edu.tw/handle/987654321/89795


    Title: 以多特徵神經網路實現連續手語識別; Realizing Sign Language Recognition using a Multi-Feature Neural Network
    Authors: 費群安; Fitriajie, Arda Satata
    Contributors: Department of Computer Science and Information Engineering
    Keywords: Image Processing; Video Processing; Continuous Sign Language Recognition; Gesture Recognition; Keypoint
    Date: 2022-07-25
    Upload Date: 2022-10-04 12:00:02 (UTC+8)
    Publisher: National Central University
    Abstract: Given RGB video streams, we aim to correctly recognize the signs involved in continuous sign language recognition (CSLR). Although the number of deep learning methods proposed in this area keeps growing, most of them rely only on RGB features, either the full-frame image or detailed crops of the hands and face. This scarcity of information during CSLR training heavily constrains their ability to learn multiple features from the input video frames. Multi-feature networks have become quite common, since current computing power no longer prevents us from scaling up network size. In this thesis, we therefore study a deep learning network and apply a multi-feature technique, with the goal of improving the current state of the art in continuous sign language recognition. Specifically, the additional feature we include in this research is the keypoint feature, which is much lighter than the image feature. The results of this research show that adding the keypoint feature as an extra modality increases the recognition rate, i.e., lowers the word error rate (WER), on the two most popular CSLR datasets: Phoenix2014 and Chinese Sign Language.
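
    The abstract describes fusing a full-frame RGB image stream with a lighter keypoint stream and evaluating with word error rate, where WER = (substitutions + deletions + insertions) / number of words in the reference gloss sequence. The code below is a minimal, hypothetical PyTorch sketch of such a two-stream multi-feature CSLR model; the module names, layer sizes, concatenation fusion, BiLSTM temporal model, CTC training objective, and gloss vocabulary size are all illustrative assumptions, not the thesis's actual architecture.

    # Hypothetical sketch of a two-stream multi-feature CSLR model: one stream
    # encodes full-frame RGB images, the other encodes 2D keypoint coordinates,
    # and the fused per-frame features feed a temporal model trained with CTC
    # loss (gloss sequences are unaligned with frames). Sizes are illustrative.
    import torch
    import torch.nn as nn

    class MultiFeatureCSLR(nn.Module):
        def __init__(self, num_glosses: int, num_keypoints: int = 133, d_model: int = 512):
            super().__init__()
            # RGB stream: a small frame-level CNN (in practice a pretrained backbone).
            self.rgb_encoder = nn.Sequential(
                nn.Conv2d(3, 32, kernel_size=3, stride=2, padding=1), nn.ReLU(),
                nn.Conv2d(32, 64, kernel_size=3, stride=2, padding=1), nn.ReLU(),
                nn.AdaptiveAvgPool2d(1), nn.Flatten(),
                nn.Linear(64, d_model),
            )
            # Keypoint stream: an MLP over flattened (x, y) coordinates per frame.
            self.kp_encoder = nn.Sequential(
                nn.Linear(num_keypoints * 2, 256), nn.ReLU(),
                nn.Linear(256, d_model),
            )
            # Temporal model over the concatenated per-frame features.
            self.temporal = nn.LSTM(2 * d_model, d_model, num_layers=2,
                                    batch_first=True, bidirectional=True)
            self.classifier = nn.Linear(2 * d_model, num_glosses + 1)  # +1 for CTC blank

        def forward(self, frames, keypoints):
            # frames:    (B, T, 3, H, W) RGB video clip
            # keypoints: (B, T, num_keypoints, 2) 2D keypoint coordinates
            B, T = frames.shape[:2]
            rgb_feat = self.rgb_encoder(frames.flatten(0, 1)).view(B, T, -1)
            kp_feat = self.kp_encoder(keypoints.flatten(2).float())
            fused, _ = self.temporal(torch.cat([rgb_feat, kp_feat], dim=-1))
            return self.classifier(fused).log_softmax(-1)  # (B, T, num_glosses + 1)

    # CTC training step on dummy data (vocabulary size is illustrative only).
    model = MultiFeatureCSLR(num_glosses=1232)
    ctc = nn.CTCLoss(blank=1232, zero_infinity=True)
    frames = torch.randn(2, 16, 3, 112, 112)
    keypoints = torch.randn(2, 16, 133, 2)
    log_probs = model(frames, keypoints)                  # (B, T, C)
    targets = torch.randint(0, 1232, (2, 5))
    loss = ctc(log_probs.permute(1, 0, 2),                # CTC expects (T, B, C)
               targets,
               input_lengths=torch.full((2,), 16),
               target_lengths=torch.full((2,), 5))
    loss.backward()

    Concatenation fusion keeps the two streams independent up to the temporal model, which is why adding the comparatively cheap keypoint stream need not greatly increase the cost of the RGB branch.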
    Appears in Collections: [Graduate Institute of Computer Science and Information Engineering] Theses and Dissertations

    Files in This Item:

    File          Description    Size    Format    Views
    index.html                   0Kb     HTML      70


    All items in NCUIR are protected by copyright, with all rights reserved.

