

    Please use this permanent URL to cite or link to this item: http://ir.lib.ncu.edu.tw/handle/987654321/77663


    Title: Apply Deep Learning to Gait Recognition with Human Detection (應用深度學習於結合自動偵測人物的步態辨識)
    Author: Hung, Chao-Ming (洪昭銘)
    Contributors: Department of Computer Science and Information Engineering
    Keywords: Computer Vision; Machine Learning; Convolutional Neural Network; Gait Recognition; Deep Learning; Dense Optical Flow
    Date: 2018-07-26
    Issue Date: 2018-08-31 14:52:03 (UTC+8)
    Publisher: National Central University
    Abstract: The topic of this thesis is the implementation of gait recognition with convolutional neural networks and the improvement of the network model. The aim is to detect person locations in RGB color image sequences, extract continuous walking sequences, and train a convolutional neural network to extract gait features from those sequences; the extracted features are then used to identify individuals.

    Gait recognition is a non-contact biometric method that determines a subject's identity or physical condition by analyzing the distinctive postures and habits each person exhibits while walking, including skeleton and joint movements. Its most notable property is that, while precise skeletons can be captured with wearable devices, recognition can also be performed without requiring much cooperation from the subject.

    Unlike skeleton-based methods, this thesis extracts optical-flow features from the input sequence with a pre-trained model and crops the target person's location as the concrete low-level feature input. Fully convolutional architectures and pre-trained models (YOLOv2 and FlowNet2.0) are mostly used to avoid restrictive input-size constraints. A model built on the Wide Residual Network (WRN) architecture is then trained to extract high-level abstract features, and the thesis focuses on how to design a feature-extraction network with higher performance and efficiency.

    The optical-flow maps produced by FlowNet2.0 effectively filter out background information, so the network concentrates on the target's motion and the abstract-feature extractor avoids learning irrelevant information (such as a person's appearance and the background). On top of this, the pre-trained person detector YOLOv2 prunes the excess input size and automates the manual labeling that would otherwise be needed during preprocessing. The WRN structure, an improvement on the Residual Network (ResNet) concept, offers better representational power and training efficiency than VGG-like architectures and is well suited to this task.

    Finally, to overcome the difficulty 2D convolutional networks have in capturing local temporal features, this thesis introduces partial 3D convolutional structures into the network, making effective use of limited memory to obtain more useful features and improve overall performance.
    The topic of this thesis is the implementation and improvement of convolutional neural networks applied to gait recognition. The goal is to train a convolutional neural network to extract gait features from human walking sequences, which are preprocessed by detecting and cropping the person ROI in each RGB image sequence. The extracted features are then used to identify people.

    Gait recognition is a non-contact biometric method that determines a person's identity or physical condition by analyzing the distinctive postures and habits people exhibit while walking, including skeleton and joint movements.

    Unlike methods based on skeleton detection, this thesis uses pre-trained models (YOLOv2 and FlowNet2.0) to extract optical-flow feature maps and ROIs from the input sequence, then crops and concatenates the optical-flow maps as the low-level feature input. We train a model built with the Wide Residual Network (WRN) architecture to extract high-level abstract features from this low-level input, and we mainly discuss how to design a feature-extraction network with higher performance and efficiency.
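    The preprocessing just described — cropping each optical-flow map to the person ROI and concatenating a window of frames into one low-level feature tensor — can be sketched as follows. This is a minimal illustration, not the thesis's actual code: the array shapes, the window length, and the function name are assumptions.

```python
import numpy as np

def build_low_level_input(flow_maps, box, window=10):
    """Crop each optical-flow map to the person ROI and concatenate a
    fixed-length window of frames along the channel axis.

    flow_maps : list of (2, H, W) arrays (x/y flow, e.g. from FlowNet2.0)
    box       : (x1, y1, x2, y2) pixel ROI from a person detector
    window    : number of consecutive frames to stack (hypothetical value)
    """
    x1, y1, x2, y2 = box
    crops = [f[:, y1:y2, x1:x2] for f in flow_maps[:window]]
    # Result has shape (2 * window, roi_h, roi_w): the "low-level
    # feature" tensor that would be fed to the WRN feature extractor.
    return np.concatenate(crops, axis=0)

# Ten synthetic 2-channel flow maps of a 160x120 frame:
flows = [np.random.randn(2, 120, 160).astype(np.float32) for _ in range(10)]
feat = build_low_level_input(flows, box=(40, 10, 104, 110), window=10)
print(feat.shape)  # (20, 100, 64)
```

    Concatenating along the channel axis (rather than a separate time axis) is one way a 2D network can see several frames at once; it is shown here only to make the shape bookkeeping concrete.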

    Extracting optical-flow feature maps with FlowNet 2.0 effectively filters out background information, preventing the model from learning unnecessary information (including a person's appearance and the background). Furthermore, we add YOLOv2 as a person detector, which prunes the excess input size and automates the manual labeling process.
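    A detector in the YOLO family typically reports a normalized center/width/height box; turning that into integer pixel crop coordinates might look like the sketch below. The `margin` parameter and the function name are hypothetical — the thesis's exact cropping rule is not stated here.

```python
def detection_to_roi(cx, cy, w, h, img_w, img_h, margin=0.1):
    """Convert a YOLO-style normalized box (center x/y, width, height,
    all in [0, 1]) into clamped integer pixel crop coordinates, padded
    by a small margin so the whole silhouette survives the crop
    (the margin value is an assumption, not taken from the thesis)."""
    bw, bh = w * (1 + margin), h * (1 + margin)
    x1 = max(0, int((cx - bw / 2) * img_w))
    y1 = max(0, int((cy - bh / 2) * img_h))
    x2 = min(img_w, int((cx + bw / 2) * img_w))
    y2 = min(img_h, int((cy + bh / 2) * img_h))
    return x1, y1, x2, y2

# A person detected at the center of a 640x480 frame:
print(detection_to_roi(0.5, 0.5, 0.2, 0.8, 640, 480))
```

    Clamping to the image bounds matters in practice, since a walking subject often touches the frame edge.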

    Finally, to overcome the difficulty 2D convolutional networks have in capturing local temporal features, we propose a method that combines 3D and 2D convolutional structures so that the network achieves better performance.
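    One way to see how a few 3D convolutions can absorb the temporal axis before handing off to ordinary 2D layers is the toy computation below. The naive loop implementation and the specific kernel sizes are illustrative only; the thesis's actual layer configuration is not given here.

```python
import numpy as np

def conv3d_valid(x, w):
    """Naive 'valid' (no padding, stride 1) 3D convolution.
    x : (C_in, T, H, W) input volume, e.g. a stack of flow features
    w : (C_out, C_in, kT, kH, kW) kernel
    Returns an array of shape (C_out, T-kT+1, H-kH+1, W-kW+1)."""
    c_out, c_in, kt, kh, kw = w.shape
    _, T, H, W = x.shape
    out = np.zeros((c_out, T - kt + 1, H - kh + 1, W - kw + 1), dtype=x.dtype)
    for o in range(c_out):
        for t in range(out.shape[1]):
            for i in range(out.shape[2]):
                for j in range(out.shape[3]):
                    out[o, t, i, j] = np.sum(x[:, t:t+kt, i:i+kh, j:j+kw] * w[o])
    return out

# 8 frames of 2-channel flow; two stacked 3D layers with temporal
# kernels 4 and 5 (illustrative sizes) collapse the time axis to 1:
x  = np.random.randn(2, 8, 12, 12).astype(np.float32)
w1 = np.random.randn(4, 2, 4, 3, 3).astype(np.float32)
w2 = np.random.randn(8, 4, 5, 3, 3).astype(np.float32)
h1 = conv3d_valid(x, w1)   # (4, 5, 10, 10)
h2 = conv3d_valid(h1, w2)  # (8, 1, 8, 8): temporal axis collapsed
feat2d = h2.squeeze(1)     # (8, 8, 8): ready for ordinary 2D layers
print(feat2d.shape)
```

    Once the temporal dimension reaches 1, the remaining layers can be plain 2D convolutions, which is one reason a partial 3D front end costs far less memory than an all-3D network.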
    Appears in Collections: [Graduate Institute of Computer Science and Information Engineering] Theses & Dissertations

    Files in This Item:

    File         Description   Size   Format   Views
    index.html                 0Kb    HTML     95      View/Open


    All items in NCUIR are protected by copyright, with all rights reserved.
