

    Please use this identifier to cite or link to this item: http://ir.lib.ncu.edu.tw/handle/987654321/72287


    Title: 基於長短期記憶深層學習方法之動作辨識 (Action Recognition Based on a Long Short-Term Memory Deep Learning Method)
    Authors: 江金晉;Chiang,Chin-Chin
    Contributors: Department of Computer Science and Information Engineering (資訊工程學系)
    Keywords: Action recognition;Long short-term memory;Deep learning;Attention model;Convolutional neural network;Neural network
    Date: 2016-08-29
    Issue Date: 2016-10-13 14:37:16 (UTC+8)
    Publisher: National Central University (國立中央大學)
    Abstract: As quality of life and everyday convenience continue to improve, many functions and applications depend on the technology developed and supported behind them. From still images to video, and from poses to actions, continuing advances in algorithms and hardware demand ever more capable functionality and better results.
    Building on a deep learning architecture based on long short-term memory (LSTM), we propose an optical flow attention model that performs action recognition in video using optical flow images. In the proposed architecture, each video is split into frames, each frame is passed through a CNN for feature extraction, and the resulting features are fed in temporal order into the optical flow attention model. The attention model is composed mainly of LSTM units; its distinguishing characteristic is that each input feature is first weighted by a processed optical-flow attention map, which amplifies the important parts of the feature. The weighted feature is then fed into the LSTM, which produces the recognition result for that time step.
    This thesis uses optical flow images as weights to dynamically track the important regions of each frame, raising the weight of the most informative features. In our action recognition experiments, the proposed optical flow attention model improves accuracy by about 3.6% over an LSTM-only baseline and by about 2.4% over the reference visual attention model. When combined with visual attention, the overall architecture outperforms the LSTM-only baseline by about 4.5% and the visual-attention-only model by about 3.3%. The results show that optical flow images used as weights effectively capture the discriminative regions of actions in video and complement visual attention to yield better recognition performance.
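The weighting step at the heart of the abstract — scaling per-frame CNN features by optical-flow-derived attention before the LSTM — can be sketched as follows. This is a minimal illustration under the assumption that the attention map is a per-location normalization of optical-flow magnitude; the thesis's exact weighting scheme, tensor shapes, and layer names are not given on this page, so `flow_attention` and its arguments are purely illustrative.

```python
def flow_attention(features, flow_magnitudes):
    """Weight each spatial feature by its normalized optical-flow magnitude,
    emphasizing regions with strong motion (illustrative sketch only).

    features        -- one value per spatial location (e.g. pooled CNN features)
    flow_magnitudes -- optical-flow magnitude at the same locations
    """
    total = sum(flow_magnitudes)
    if total == 0:  # static frame: no motion cue, leave features unchanged
        return list(features)
    weights = [m / total for m in flow_magnitudes]  # normalize to sum to 1
    return [f * w for f, w in zip(features, weights)]

# Toy example: four spatial locations, motion concentrated at index 2.
features = [1.0, 1.0, 1.0, 1.0]
flow = [0.0, 1.0, 3.0, 0.0]
weighted = flow_attention(features, flow)  # → [0.0, 0.25, 0.75, 0.0]
```

In the full architecture each such weighted feature vector would then be fed, in temporal order, into the LSTM, which emits a recognition result per time step.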
    Appears in Collections: [Graduate Institute of Computer Science and Information Engineering (資訊工程研究所)] Master's and Doctoral Theses


    All items in NCUIR are protected by copyright, with all rights reserved.
