NCU Institutional Repository: Item 987654321/84078


    Please use this permanent URL to cite or link to this item: http://ir.lib.ncu.edu.tw/handle/987654321/84078


    Title: Skeleton based continuous sign language action detection and recognition
    Author: Zhang, Binni (張檳妮)
    Contributors: Department of Computer Science and Information Engineering
    Keywords: action detection; action recognition
    Date: 2020-07-29
    Date of upload: 2020-09-02 18:01:44 (UTC+8)
    Publisher: National Central University
    Abstract: Research applying deep learning to image recognition has advanced enormously over the past decade. Deep learning methods are now used routinely across many areas of daily life to analyze visual data automatically and extract the information people need, and most of these attempts have produced very good results. With improvements in computing hardware and continued refinement of algorithms, this line of research has expanded from still images to video.
    People with hearing or speech impairments face many inconveniences in daily life, especially when communicating with people who do not have such impairments. Throughout the development of deep learning and image recognition, researchers have tried to use computers to help them communicate freely, in the sign language they already use, with people who do not have speech impairments. This thesis combines deep learning with video processing to study sign language action detection and recognition: we build a deep learning network that processes sign language video and translates it into text. The network operates in three stages.
    The first stage is skeleton feature extraction. Human action video usually contains much information unrelated to the action itself, such as background, clothing, and hairstyle. To remove the influence of this irrelevant information, we first use the OpenPose module to extract a human skeleton graph from every frame of the video, keeping only the information relevant to the action, and then apply a graph convolutional network (GCN) to the skeleton graphs to extract action features. The second stage segments candidate action clips: given the per-frame action features, a small convolutional network locates the start and end time points of actions and, combined with a preliminary per-frame judgment of whether an action is taking place, cuts candidate action clips out of the full video. The third stage classifies these candidate clips and removes temporally overlapping ones with non-maximum suppression, yielding the final detection and recognition results.
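    A minimal Python sketch of the graph-convolution step is given below: one spatial graph-convolution layer that mixes each joint's features with those of its neighbours through a normalized adjacency matrix. The layer design, joint count, and edge list are illustrative assumptions for this sketch, not the network defined in the thesis.

    import torch
    import torch.nn as nn

    class SkeletonGCNLayer(nn.Module):
        """One spatial graph-convolution layer over pose keypoints.

        Toy sketch (not the thesis code): each joint's features are averaged
        with its neighbours via a normalized adjacency matrix, then projected
        by a learnable linear map.
        """
        def __init__(self, in_channels, out_channels, adjacency):
            super().__init__()
            # Row-normalized adjacency with self-loops: D^-1 (A + I)
            a = adjacency + torch.eye(adjacency.size(0))
            self.register_buffer("a_norm", a / a.sum(dim=1, keepdim=True))
            self.proj = nn.Linear(in_channels, out_channels)

        def forward(self, x):
            # x: (batch, frames, joints, channels), e.g. OpenPose (x, y, confidence)
            x = torch.einsum("vw,btwc->btvc", self.a_norm, x)  # aggregate over neighbouring joints
            return torch.relu(self.proj(x))                    # per-joint feature projection

    # Toy usage with 18 body joints; the edge list here is hypothetical, a real
    # model would use the OpenPose limb connections.
    num_joints = 18
    edges = [(0, 1), (1, 2), (2, 3), (3, 4), (1, 5), (5, 6), (6, 7),
             (1, 8), (8, 9), (9, 10), (1, 11), (11, 12), (12, 13)]
    adj = torch.zeros(num_joints, num_joints)
    for i, j in edges:
        adj[i, j] = adj[j, i] = 1.0
    layer = SkeletonGCNLayer(3, 64, adj)
    frames = torch.randn(2, 30, num_joints, 3)  # 2 clips, 30 frames each
    print(layer(frames).shape)                  # torch.Size([2, 30, 18, 64])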
    In our experiments we use the CSLR (Chinese Sign Language Recognition) dataset, which consists of 2D RGB sign language videos, each shorter than 10 seconds, with the signer facing the camera. We take 15 continuous sign language sentences and classify the 31 words that appear in them. The three modules are trained by alternately fixing part of the parameters. After training, we compared our results with those of other sign language recognition networks evaluated on the same dataset; our sentence accuracy reaches 84.5%.
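    As a rough illustration of training by alternately fixing part of the parameters, the sketch below rotates which of the three modules is trainable from epoch to epoch; the module interfaces, optimizer, and schedule are assumptions for illustration, not the actual training setup used in the thesis.

    import torch

    def set_trainable(module, flag):
        # Freeze or unfreeze every parameter of one sub-network.
        for p in module.parameters():
            p.requires_grad_(flag)

    def alternating_training(gcn, boundary_net, classifier, loader, loss_fn, epochs=30):
        """Train three modules while holding two of them fixed in each epoch.
        `gcn`, `boundary_net`, `classifier`, `loader`, and `loss_fn` are
        placeholder callables standing in for the thesis components."""
        modules = [gcn, boundary_net, classifier]
        params = [p for m in modules for p in m.parameters()]
        optimizer = torch.optim.Adam(params, lr=1e-4)
        for epoch in range(epochs):
            active = modules[epoch % 3]             # rotate which module learns
            for m in modules:
                set_trainable(m, m is active)
            for clips, labels in loader:
                optimizer.zero_grad()
                feats = gcn(clips)                  # per-frame skeleton features
                segments = boundary_net(feats)      # candidate start/end proposals
                logits = classifier(feats, segments)
                loss_fn(logits, labels).backward()  # gradients accumulate only on `active`
                optimizer.step()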
    This work has two main features. First, actions are represented by human skeleton keypoint graphs, and graph convolution is used to extract features, which filters out background features unrelated to the action. Second, a small convolutional network detects the start and end time points of each action, which requires little computation and allows actions of flexible length to be found.
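    To make the boundary-detection idea concrete, the following sketch shows a small 1-D temporal CNN that scores every frame for "action start", "action end", and "actionness"; the layer sizes and three-channel output are assumptions for illustration, not the detector described in the thesis.

    import torch
    import torch.nn as nn

    class BoundaryNet(nn.Module):
        """A tiny temporal CNN that slides over the per-frame feature sequence
        and emits three probabilities per frame: start, end, and actionness.
        (Illustrative sketch, not the thesis architecture.)"""
        def __init__(self, feat_dim=64, hidden=32):
            super().__init__()
            self.net = nn.Sequential(
                nn.Conv1d(feat_dim, hidden, kernel_size=3, padding=1),
                nn.ReLU(),
                nn.Conv1d(hidden, 3, kernel_size=3, padding=1),
                nn.Sigmoid(),
            )

        def forward(self, frame_feats):
            # frame_feats: (batch, frames, feat_dim) -> scores: (batch, frames, 3)
            return self.net(frame_feats.transpose(1, 2)).transpose(1, 2)

    # Frames where the start and end scores peak, with high actionness in between,
    # are paired into candidate clips; overlapping candidates are then pruned by
    # non-maximum suppression before classification.
    scores = BoundaryNet()(torch.randn(1, 120, 64))
    print(scores.shape)  # torch.Size([1, 120, 3])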
    Appears in Collections: [Graduate Institute of Computer Science and Information Engineering] Theses & Dissertations

    Files in This Item:

    File          Description    Size    Format    Views
    index.html                   0 KB    HTML      125


    All items in NCUIR are protected by copyright, with all rights reserved.

