Master's/Doctoral Thesis 93542001: Complete Metadata Record

DC Field: Value [Language]
dc.contributor: 資訊工程學系 [zh_TW]
dc.creator: 李俊傑 [zh_TW]
dc.creator: Chun-chieh Lee [en_US]
dc.date.accessioned: 2013-07-15T07:39:07Z
dc.date.available: 2013-07-15T07:39:07Z
dc.date.issued: 2013
dc.identifier.uri: http://ir.lib.ncu.edu.tw:88/thesis/view_etd.asp?URN=93542001
dc.contributor.department: 資訊工程學系 [zh_TW]
dc.description: 國立中央大學 [zh_TW]
dc.description: National Central University [en_US]
dc.description.abstract: 幾十年來,利用各種影像或視訊來進行於身分辨識的應用技術,有相當多的研究單位關注與投入。其中,行人步態被認為是一種很有潛力的特徵,它能在較遠距離或低解析度的視訊中來有效識別身分。在這篇論文中,我們提出了行人身分辨識的架構,此架構同樣是擷取行人步行時所呈現的特徵,但它能進一步處理當測試者與資料庫裡的人被拍攝的角度可能是不一樣的情形。在我們的架構中,原始步態特徵是一個從行走視訊所提取的時空模板向量。這些特徵向量先投影到相對應的子空間,此子空間與拍攝角度有關。然後,這些位於相同子空間的特徵向量,被用來學習屬於該子空間裡特有的度量尺度。在測試者與資料庫的拍攝角度是相同的情形下,用事先在該子空間所學習得來的度量尺度來計算相似程度。另外,在測試者與資料庫的拍攝角度是不一樣的情形下,我們事先建構好視角轉換模型(VTM)。在辨識測試者的身分時,假設資料庫的拍攝角度為 j,測試者的拍攝角度為 i,先將測試者的子空間特徵向量,轉換投影到資料庫的拍攝角度 j 所對應的子空間裡。然後,測試者與資料庫裡行人間的相似程度,是用資料庫對應子空間所學習得來的度量尺度來計算。我們用公開的標竿步態資料庫進行多個實驗,實驗結果顯示,結合特徵轉換與度量尺度學習的技術,對行人身分識別率的提升有顯著的效果。 [zh_TW]
dc.description.abstract: Human identification using various visual cues has attracted considerable research attention for decades. Among these cues, gait has been considered a promising way to recognize individuals at a distance or at low resolution. In this dissertation, we propose a human recognition framework based on the biometric trait conveyed by a walking subject, where the viewing angles of the gallery and probe may differ. The initial gait feature used in our framework is a spatio-temporal template extracted from one walking sequence. These feature vectors are projected into the subspace corresponding to the capturing angle of the walking subject. The embedded feature vectors, which are viewing-angle dependent, are then used to learn a distance metric. In identical-view gait recognition, the metric learned from the embedded vectors of the same view is employed to measure similarity between the probe and the gallery. In cross-view gait recognition, where the viewing angles of the probe and gallery differ, a view transformation model (VTM) is constructed in advance via a learning scheme. At the recognition stage, assuming the gallery set is collected at viewing angle j, the embedded vector of a probe captured at another viewing angle i is first transformed into the subspace spanned by the gallery embedded vectors. The similarities between the probe and the gallery are then measured using the metric learned on the subspace corresponding to viewing angle j. Experiments were conducted on a public benchmark database, and the results demonstrate that combining feature transformation with metric learning yields a notable improvement in gait recognition performance. [en_US]
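The cross-view pipeline described in the abstract — view-specific subspace embedding, a per-view distance metric, and a VTM that maps a probe's embedding into the gallery's subspace — can be sketched as follows. This is a minimal illustrative sketch only: the random toy data, the plain least-squares VTM, the orthonormal projection bases, and the identity placeholder metric are all assumptions for demonstration, not the dissertation's actual algorithms.

```python
import numpy as np

rng = np.random.default_rng(0)

def project(X, basis):
    """Embed raw gait templates into a view-specific subspace."""
    return X @ basis

def learn_vtm(Z_i, Z_j):
    """View transformation model (hypothetical): least-squares linear map
    from view i's subspace to view j's subspace, fit on paired samples."""
    W, *_ = np.linalg.lstsq(Z_i, Z_j, rcond=None)
    return W

def metric_distance(u, v, M):
    """Distance under a learned Mahalanobis-style metric M.
    Here M is an identity placeholder for the per-view learned metric."""
    d = u - v
    return float(d @ M @ d)

# Toy setup: raw templates in R^10, two viewing angles i and j,
# each with a 4-dimensional orthonormal subspace basis.
X_train = rng.normal(size=(50, 10))
basis_i = np.linalg.qr(rng.normal(size=(10, 4)))[0]
basis_j = np.linalg.qr(rng.normal(size=(10, 4)))[0]

# Paired embeddings of the same training subjects seen from both views.
Z_i = project(X_train, basis_i)
Z_j = project(X_train, basis_j)
W = learn_vtm(Z_i, Z_j)

# Recognition stage: probe captured at view i, gallery collected at view j.
probe = project(rng.normal(size=(1, 10)), basis_i) @ W  # map into view j
gallery = project(rng.normal(size=(5, 10)), basis_j)
M_j = np.eye(4)  # stands in for the metric learned on view j's subspace
scores = [metric_distance(probe[0], g, M_j) for g in gallery]
best = int(np.argmin(scores))
print("closest gallery subject index:", best)
```

The key design point mirrored here is that the metric is tied to the gallery's view: the probe is first moved into viewing angle j's subspace via the VTM, and only then compared with the metric learned for that subspace.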
dc.subject: 步態辨識 [zh_TW]
dc.subject: 視角轉換模型 [zh_TW]
dc.subject: 度量尺度學習 [zh_TW]
dc.subject: Gait Recognition [en_US]
dc.subject: View Transformation Model [en_US]
dc.subject: Metric Learning [en_US]
dc.title: 特徵轉換結合度量尺度學習與其在步態辨識上之應用 [zh_TW]
dc.title: Feature Transformation Coupled with Metric Learning with Application to Gait Recognition [en_US]
dc.language.iso: zh-TW
dc.type: 博碩士論文 [zh_TW]
dc.type: thesis [en_US]
dc.publisher: National Central University [en_US]
