

    Please use this identifier to cite or link to this item: http://ir.lib.ncu.edu.tw/handle/987654321/61073


    Title: 特徵轉換結合度量尺度學習與其在步態辨識上之應用;Feature Transformation Coupled with Metric Learning with Application to Gait Recognition
    Authors: 李俊傑;Lee,Chun-chieh
    Contributors: 資訊工程學系 (Department of Computer Science and Information Engineering)
    Keywords: 步態辨識;視角轉換模型;度量尺度學習;Gait Recognition;View Transformation Model;Metric Learning
    Date: 2013-07-15
    Issue Date: 2013-08-22 12:11:17 (UTC+8)
    Publisher: 國立中央大學 (National Central University)
    Abstract: 幾十年來,利用各種影像或視訊來進行於身分辨識的應用技術,有相當多的研究單位關注與投入。其中,行人步態被認為是一種很有潛力的特徵,它能在較遠距離或低解析度的視訊中來有效識別身分。在這篇論文中,我們提出了行人身分辨識的架構,此架構同樣是擷取行人步行時所呈現的特徵,但它能進一步處理當測試者與資料庫裡的人被拍攝的角度可能是不一樣的情形。在我們的架構中,原始步態特徵是一個從行走視訊所提取的時空模板向量。這些特徵向量先投影到相對應的子空間,此子空間與拍攝角度有關。然後,這些位於相同子空間的特徵向量,被用來學習屬於該子空間裡特有的度量尺度。
    在測試者與資料庫的拍攝角度是相同的情形下,用事先在該子空間所學習得來的度量尺度來計算相似程度。另外,在測試者與資料庫的拍攝角度是不一樣的情形下,我們事先建構好視角轉換模型(VTM)。在辨識測試者的身分時,假設資料庫的拍攝角度為 j,測試者的拍攝角度為 i,先將測試者的子空間特徵向量,轉換投影到資料庫的拍攝角度 j 所對應的子空間裡。然後,測試者與資料庫裡行人間的相似程度,是用資料庫對應子空間所學習得來的度量尺度來計算。我們用公開的標竿步態資料庫進行多個實驗,實驗結果顯示,結合特徵轉換與度量尺度學習的技術,對行人身分識別率的提升有顯著的效果。
    Human identification from various visual cues has attracted considerable research attention for decades. Among these cues, gait has been considered a promising biometric for recognizing individuals at a distance or in low-resolution video. In this dissertation, we propose a human recognition framework based on the biometric trait conveyed by a walking subject, one that handles the case where the viewing angles of the gallery and probe differ. The initial gait feature in our framework is a spatio-temporal template extracted from a walking sequence. Each feature vector is projected into the subspace corresponding to the angle at which the subject was captured. The embedded feature vectors, which are viewing-angle dependent, are then used to learn a distance metric for that subspace.
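A rough sketch of the per-view embedding and metric described above, assuming a PCA-based subspace and a generic positive-semidefinite metric matrix; the function names are hypothetical and the actual feature extraction and metric-learning algorithms of the thesis are not reproduced here:

```python
import numpy as np

def fit_view_subspace(features, dim):
    """Fit an illustrative PCA subspace for one viewing angle.

    features: (N, D) array of gait feature vectors captured at one angle.
    Returns the mean and a (D, dim) orthonormal projection basis.
    """
    mean = features.mean(axis=0)
    centered = features - mean
    # Principal directions from the SVD of the centered data matrix.
    _, _, vt = np.linalg.svd(centered, full_matrices=False)
    basis = vt[:dim].T
    return mean, basis

def embed(x, mean, basis):
    """Project a raw feature vector into the view-specific subspace."""
    return (x - mean) @ basis

def mahalanobis_sq(u, v, metric):
    """Squared distance between two embedded vectors under a learned
    PSD metric matrix (smaller means more similar)."""
    d = u - v
    return float(d @ metric @ d)
```

With `metric` set to the identity this reduces to squared Euclidean distance; a metric-learning step would replace the identity with a matrix learned from same-view training pairs.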
    In identical-view gait recognition, the metric learned from the embedded vectors of the same view is used to measure similarity between probe and gallery. In cross-view gait recognition, where the viewing angles of probe and gallery differ, a view transformation model (VTM) is constructed in advance by a learning scheme. At the recognition stage, assuming the gallery set is captured at viewing angle j and the probe at another viewing angle i, the probe's embedded vector is first transformed into the subspace spanned by the gallery's embedded vectors. The similarities between probe and gallery are then measured with the metric learned on the subspace corresponding to viewing angle j. Experiments on a public benchmark gait database show that combining feature transformation with metric learning yields a notable improvement in gait recognition performance.
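The cross-view matching stage can be illustrated with a simplified linear VTM fitted by least squares; this stands in for the thesis's learned transformation model and assumes paired training embeddings of the same subjects at both views (all names here are illustrative):

```python
import numpy as np

def fit_vtm(emb_view_i, emb_view_j):
    """Fit a linear map W so that emb_view_i @ W approximates emb_view_j.

    A least-squares stand-in for a learned view transformation model:
    rows of the two arrays are embeddings of the same walking subjects
    captured at viewing angles i and j, respectively.
    """
    W, *_ = np.linalg.lstsq(emb_view_i, emb_view_j, rcond=None)
    return W

def cross_view_match(probe_emb_i, gallery_emb_j, W, metric_j):
    """Transform a view-i probe into view j's subspace, then rank the
    gallery by squared Mahalanobis distance under view j's metric."""
    transformed = probe_emb_i @ W
    diffs = gallery_emb_j - transformed
    dists = np.einsum('nd,de,ne->n', diffs, metric_j, diffs)
    return int(np.argmin(dists)), dists
```

The returned index is the best-matching gallery subject; in the thesis's setting the metric for angle j would come from the same-view metric-learning step rather than the identity used in a quick test.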
    Appears in Collections: [資訊工程研究所] 博碩士論文 (Graduate Institute of Computer Science and Information Engineering: master's and doctoral theses)

    Files in This Item:

    File          Description    Size    Format
    index.html                   0Kb     HTML


    All items in NCUIR are protected by copyright, with all rights reserved.
