Master's/Doctoral Thesis 103552004: Detailed Record




Name: Yu-Chang Wu (吳郁樟)    Graduate Department: In-Service Master Program, Department of Computer Science and Information Engineering
Thesis Title: A Gait Recognition System Based on Walking Posture Features
(Chinese title: 一個基於行走姿勢特徵的步態辨識系統)
Related Theses
★ An Intelligent Controller Development Platform Integrating a GRAFCET Virtual Machine
★ Design and Implementation of a Distributed Industrial Electronic Kanban Network System
★ Design and Implementation of a Two-Point Touch Screen Based on a Dual-Camera Vision System
★ An Embedded Computing Platform for Intelligent Robots
★ An Embedded System for Real-Time Moving Object Detection and Tracking
★ A Multiprocessor Architecture and Distributed Control Algorithm for Solid-State Drives
★ A Human-Computer Interaction System Based on Stereo-Vision Gesture Recognition
★ Robot System-on-Chip Design Integrating Bio-Inspired Intelligent Behavior Control
★ Design and Implementation of an Embedded Wireless Image Sensor Network
★ A License Plate Recognition System Based on a Dual-Core Processor
★ Continuous 3D Gesture Recognition Based on Stereo Vision
★ Design and Hardware Implementation of a Miniature, Ultra-Low-Power Wireless Sensor Network Controller
★ Real-Time Face Detection, Tracking, and Recognition in Streaming Video: An Embedded System Design
★ Embedded Hardware Design of a Fast Stereo Vision System
★ Design and Implementation of a Real-Time Image Stitching System
★ An Embedded Gait Recognition System Based on a Dual-Core Platform
  1. The author has agreed to make this electronic thesis openly available immediately.
  2. The released electronic full text is licensed only for personal, non-profit retrieval, reading, and printing for the purpose of academic research.
  3. Please comply with the Copyright Act of the Republic of China; do not reproduce, distribute, adapt, repost, or broadcast the work without authorization.

Abstract (Chinese): Among the various biometric technologies, gait features can be recognized at medium distance, making them suitable for street surveillance systems. Common gait recognition approaches often rely on background modeling, which makes foreground extraction difficult when the background is complex and dynamic. Because human pose features are not easily affected by dynamic backgrounds, this study designs a gait recognition system based on human pose features, comprising human pose detection, pedestrian tracking, walking cycle detection, gait feature construction, and gait recognition. Human pose detection uses OpenPose, pedestrian tracking uses a person re-identification method, and walking cycles are detected from the positions of the ankles. For gait recognition, this study combines human pose and optical flow features and proposes a human-skeleton mask method to remove part of the optical-flow background noise; the mask preserves the skeleton outline features, while the optical flow preserves the motion directions. Finally, a CNN gait recognition method is trained on the CASIA Gait Database (Dataset B), achieving a gait recognition rate of 85.5%, which verifies that the method is effective.
Abstract (English): Among various types of biometrics, gait analysis enables recognition at medium distance and is therefore suitable for street surveillance systems. A common approach to gait recognition is background subtraction, but foreground extraction becomes difficult when the background is complex and dynamic. Because human pose features are not easily influenced by dynamic backgrounds, this study designed a gait recognition system based on human pose features. The system comprises human pose detection, pedestrian tracking, walking cycle detection, gait feature construction, and gait recognition. Human pose detection was performed using OpenPose; pedestrian tracking was conducted through person re-identification; and walking cycles were detected from the positions of the ankles. For gait recognition, this study combined human pose and optical flow features and proposed a human-skeleton mask method to remove part of the background noise in the optical flow. The masks retained the skeleton outline features, and the optical flow retained the motion directions. Finally, a convolutional neural network (CNN) gait recognition method was trained on Dataset B of the Chinese Academy of Sciences (CASIA) Gait Database. The resulting gait recognition rate of 85.5% confirms that the proposed method is effective.
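The abstracts describe a pipeline in which OpenPose keypoints are turned into a skeleton mask that suppresses optical-flow background noise, and the masked flow maps of one walking cycle are fed to a CNN classifier. The following is a minimal Python sketch of that idea, not the thesis code: detect_pose is a dummy placeholder for OpenPose inference, SKELETON_EDGES and the mask thickness are illustrative assumptions, OpenCV's Farneback algorithm stands in for whatever optical-flow method the thesis actually uses, and the CNN itself is omitted.

import cv2
import numpy as np

# Hypothetical skeleton topology: pairs of keypoint indices to connect with line segments.
SKELETON_EDGES = [(0, 1), (1, 2), (2, 3), (1, 4), (4, 5), (1, 6), (6, 7), (6, 8)]

def detect_pose(frame):
    """Placeholder for OpenPose inference. Returns a fixed dummy stick figure
    (an (N, 2) array of pixel coordinates) so the sketch runs end to end;
    a real system would return the detected 2D keypoints for this frame."""
    h, w = frame.shape[:2]
    return np.array([[w * 0.50, h * 0.20],   # head
                     [w * 0.50, h * 0.35],   # neck
                     [w * 0.40, h * 0.50],   # left elbow (illustrative)
                     [w * 0.35, h * 0.60],   # left wrist
                     [w * 0.60, h * 0.50],   # right elbow
                     [w * 0.65, h * 0.60],   # right wrist
                     [w * 0.50, h * 0.60],   # hip
                     [w * 0.45, h * 0.90],   # left ankle
                     [w * 0.55, h * 0.90]])  # right ankle

def skeleton_mask(keypoints, shape, thickness=15):
    """Rasterize the skeleton into a binary mask that keeps the body outline
    and is later used to suppress optical-flow background noise."""
    mask = np.zeros(shape[:2], dtype=np.uint8)
    for a, b in SKELETON_EDGES:
        pa = tuple(int(v) for v in keypoints[a])
        pb = tuple(int(v) for v in keypoints[b])
        cv2.line(mask, pa, pb, 255, thickness)
    return mask

def masked_flow(prev_gray, curr_gray, mask):
    """Dense optical flow (Farneback, as a stand-in) restricted to the skeleton
    region, so only the walker's motion directions are retained."""
    flow = cv2.calcOpticalFlowFarneback(prev_gray, curr_gray, None,
                                        0.5, 3, 15, 3, 5, 1.2, 0)
    flow[mask == 0] = 0.0
    return flow  # shape (H, W, 2): per-pixel motion vectors

def gait_features(frames):
    """Stack masked-flow maps over one walking cycle into a tensor for a CNN classifier."""
    maps = []
    prev_gray = cv2.cvtColor(frames[0], cv2.COLOR_BGR2GRAY)
    for frame in frames[1:]:
        curr_gray = cv2.cvtColor(frame, cv2.COLOR_BGR2GRAY)
        mask = skeleton_mask(detect_pose(frame), frame.shape)
        maps.append(masked_flow(prev_gray, curr_gray, mask))
        prev_gray = curr_gray
    return np.stack(maps)

With real per-frame OpenPose keypoints, gait_features(frames) computed over one detected walking cycle would yield the feature tensor that a CNN trained on CASIA Dataset B could classify.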
Keywords (Chinese): ★ 步態辨識    Keywords (English): ★ Gait Recognition
Thesis Table of Contents:
Abstract (Chinese)
Abstract (English)
Acknowledgements
Table of Contents
List of Figures
List of Tables
Chapter 1  Introduction
1.1 Research Background
1.2 Research Motivation
1.3 Thesis Organization
Chapter 2  Literature Review
2.1 Review of Classical Gait Features
2.1.1 Silhouette-Based Features
2.1.2 3D Model-Based Features
2.1.3 Depth Camera-Based Features
2.2 Review of Classical Gait Classification Methods
2.2.1 Nearest Neighbor-Based Classification
2.2.2 Support Vector Machine-Based Classification
2.3 Neural Network-Based Gait Recognition
2.4 Fundamentals of Neural Networks
2.4.1 Activation Functions
2.4.2 Optimization Functions
2.5 Convolutional Neural Networks
2.6 Neural Network Training Methods
2.6.1 Open-Set Identification
2.6.2 Open-Set Verification
2.6.3 Closed-Set Identification
2.6.4 Closed-Set Verification
2.7 Review of Human Pose Algorithms
2.7.1 AlphaPose
2.7.2 DensePose
Chapter 3  Gait Recognition System Design
3.1 Gait Database
3.2 Gait Recognition Architecture
3.3 Human Pose Construction
3.3.1 OpenPose
3.3.2 Pedestrian Recognition
3.4 Pedestrian Tracking
3.5 Walking Cycle Detection
3.6 Gait Feature Construction
3.6.1 Pedestrian Image Rectification
3.6.2 Human Pose and Heatmap Features
3.6.3 Optical Flow Features
3.6.4 Optical Flow and Human Mask Features
3.7 CNN Gait Recognition
Chapter 4  Experimental Results
4.1 Experimental Methods
4.2 Development Environment
4.3 Experimental Results
4.3.1 Validation Methods
4.3.2 Gait Feature Experiments
Chapter 5  Conclusions
5.1 Conclusions
5.2 Future Work
References
Advisor: Ching-Han Chen (陳慶瀚)    Approval Date: 2019-01-30
