Master's/Doctoral Thesis 100522084 Detailed Record




Name  Shu-ming Liu (劉書銘)    Department  Computer Science and Information Engineering
Thesis Title  Depth-based Loose Static Gesture Recognition
(以深度資訊為基礎的寬鬆靜態手勢辨識)
Related Theses
★ Video error concealment for large areas and scene changes
★ Force feedback correction and display in a virtual haptic system
★ Multispectral satellite image fusion and infrared image synthesis
★ A laparoscopic cholecystectomy simulation system
★ Dynamically loaded multiresolution terrain modeling in a flight simulation system
★ Wavelet-based multiresolution terrain modeling and texture mapping
★ Multiresolution optical flow analysis and depth computation
★ Volume-preserving deformation modeling for laparoscopic surgery simulation
★ Interactive multiresolution model editing techniques
★ Wavelet-based multiresolution edge tracking for edge detection
★ Multiresolution modeling based on quadric error and attribute criteria
★ Progressive image compression based on integer wavelet transform and grey theory
★ Tactical simulation based on dynamically loaded multiresolution terrain modeling
★ Face detection and feature extraction using spatial relations of multi-level segmentation
★ Wavelet-based image watermarking and compression
★ Appearance-preserving and view-dependent multiresolution modeling
  1. This electronic thesis is approved for immediate open access.
  2. The open-access electronic full text is licensed to users only for academic research purposes: personal, non-profit retrieval, reading, and printing.
  3. Please observe the Copyright Act of the Republic of China; do not reproduce, distribute, adapt, repost, or broadcast the work without authorization.

Abstract (Chinese)   In recent years, gesture recognition has become an important research topic, with wide applications in game control, home-appliance operation, robot control, and similar tasks. However, some gesture recognition applications are easily affected by ambient lighting and the background, or are restricted to rigid standard gestures with unbent fingers, which makes them uncomfortable to use. A recognition system is therefore needed that is unaffected by ambient lighting and the background and that accepts loose gestures, allowing the hand to be slightly rotated up, down, left, or right.
The goal of this thesis is to build a depth-based loose static gesture recognition system that recognizes fifteen common gestures. The proposed system consists of two parts: hand detection and gesture recognition. In hand detection, we capture only the depth image from the Kinect, which removes the influence of ambient lighting and the background, and we locate the hand region using the body skeleton provided by the Kinect SDK. In gesture recognition, we first build a one-dimensional function (signature) of the distance from the contour to the palm center and use it to detect and filter hand feature points; from the feature points on the hand contour we locate the metacarpophalangeal (MCP) joints at the base of the fingers; we then compute, for each finger, the angle formed by its MCP joint, the palm center, and a reference anchor point as the finger-angle feature; a table of angle differences among the five fingers serves as the basis for recognition; finally, the number of fingers and their identified angles are used to recognize the loose static gesture.
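The signature step described above can be sketched briefly. The following is a minimal illustration in Python, assuming the hand contour and palm center have already been extracted from the depth image; the function names, the peak test, and the 1.3 distance-ratio threshold are illustrative assumptions rather than values from the thesis.

import numpy as np

def contour_signature(contour, palm_center):
    # 1-D signature: Euclidean distance from the palm center to each
    # contour point, ordered along the hand boundary.
    contour = np.asarray(contour, dtype=float)     # (N, 2) array of (x, y) points
    center = np.asarray(palm_center, dtype=float)  # (x, y) of the palm center
    return np.linalg.norm(contour - center, axis=1)

def candidate_fingertips(signature, min_ratio=1.3):
    # Keep local maxima of the signature that are clearly farther from the
    # palm center than average; their indices are candidate fingertip points.
    # min_ratio is an assumed threshold for this sketch.
    mean_d = signature.mean()
    n = len(signature)
    peaks = []
    for i in range(n):
        prev_d, next_d = signature[i - 1], signature[(i + 1) % n]
        if signature[i] >= prev_d and signature[i] >= next_d and signature[i] > min_ratio * mean_d:
            peaks.append(i)
    return peaks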
Experiments show that the overall finger-counting accuracy of the system is above 95%, the recognition accuracy for standard gestures reaches 90.03%, and the accuracy for loose gestures reaches 80.24%; for loose gestures, recognition with the polygonally approximated contour outperforms the original jagged contour.
Abstract (English)   In recent years, gesture recognition has become an important issue in the field of human-computer interaction. The most common applications include game control, home-appliance control, robot control, etc. However, because of the effects of lighting and complex backgrounds, and the restriction to standard gestures with unbent fingers, some gesture recognition systems are neither intuitive nor comfortable for users. Thus, a loose gesture recognition system that is robust to lighting and complex backgrounds is needed.
The purpose of this thesis is to develop a depth-based loose static gesture recognition system that can recognize fifteen common gestures. The proposed system includes two parts: hand detection and gesture recognition. In hand detection, we capture only the depth map from the Kinect, which is unaffected by lighting and complex backgrounds, and then locate the hand region in the depth map using the skeleton tracking of the Kinect SDK. In gesture recognition, we first create a signature, a 1-D functional representation of the hand boundary formed by plotting the distance from the palm center to each boundary point. Second, we detect feature points in the signature and locate the metacarpophalangeal (MCP) joint of each finger in the hand region. Third, we calculate the angle of each finger from three points: the palm center at the vertex, and the anchor point and the MCP joint on the two rays. Finally, the system identifies each finger with an angle table and uses the number of fingers and their angles to determine the loose static gesture.
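A rough sketch of the angle feature and angle-table lookup described above, under assumed conventions: the angle is measured at the palm center between the ray toward the anchor point and the ray toward a finger's MCP joint, and the table entries and tolerance below are placeholders, not the thesis's measured values.

import math

def finger_angle(palm_center, anchor_point, mcp_joint):
    # Angle (degrees) at the palm center between the ray toward the anchor
    # (reference) point and the ray toward a finger's MCP joint.
    ax, ay = anchor_point[0] - palm_center[0], anchor_point[1] - palm_center[1]
    mx, my = mcp_joint[0] - palm_center[0], mcp_joint[1] - palm_center[1]
    cos_t = (ax * mx + ay * my) / (math.hypot(ax, ay) * math.hypot(mx, my))
    return math.degrees(math.acos(max(-1.0, min(1.0, cos_t))))

# Placeholder angle table (degrees) for a right hand facing the camera;
# the thesis derives its own table of inter-finger angle differences.
ANGLE_TABLE = {"thumb": 10.0, "index": 45.0, "middle": 70.0, "ring": 95.0, "little": 120.0}

def identify_finger(angle, tolerance=12.0):
    # Label a detected finger by the nearest entry in the angle table, with a
    # tolerance that absorbs the slight rotation allowed for loose gestures.
    name, nominal = min(ANGLE_TABLE.items(), key=lambda kv: abs(kv[1] - angle))
    return name if abs(nominal - angle) <= tolerance else None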
In this thesis, experiments show that finger-counting accuracy is above 95%, and recognition accuracy reaches 90.03% for standard gestures and 80.24% for loose gestures; for loose gestures, the polygonally approximated hand shape outperforms the original hand shape.
Keywords (Chinese) ★ Kinect Sensor
★ Depth Image
★ Loose
★ Static
★ Gesture Recognition
★ Human-Computer Interaction
Keywords (English) ★ Kinect
★ Depth Image
★ Loose
★ Static
★ Gesture Recognition
★ Human Computer Interaction
Thesis Outline  Abstract (Chinese)
Abstract (English)
Acknowledgments
Table of Contents
List of Figures
List of Tables
Chapter 1  Introduction
  1.1  Motivation and Objectives
  1.2  System Overview
  1.3  Thesis Organization
Chapter 2  Related Work
  2.1  Overview of Depth-Map Acquisition Devices
  2.2  Gesture Acquisition and Analysis Methods
    2.2.1  Basic Definitions
    2.2.2  Vision-based Recognition Methods
    2.2.3  Glove-based Recognition Methods
  2.3  Gesture Recognition Systems
    2.3.1  Hand Detection
    2.3.2  Hand Feature Extraction
    2.3.3  Gesture Classification
Chapter 3  Hand Detection
  3.1  Hand Localization
  3.2  Hand Region Segmentation
    3.2.1  Defining the ROI
    3.2.2  Hand Region Extraction
    3.2.3  Contour Tracing
  3.3  Wrist Segmentation
Chapter 4  Gesture Recognition
  4.1  Feature Point Extraction
    4.1.1  Distance Transform
    4.1.2  Building the 1-D Signature of Contour-to-Palm-Center Distances
    4.1.3  Hand Feature Point Extraction
    4.1.4  Hand Feature Point Filtering
  4.2  Finger Identification
    4.2.1  Gesture Definitions
    4.2.2  Locating the Finger MCP (Metacarpophalangeal) Joints
    4.2.3  Verification of Finger Geometric Properties
    4.2.4  Finger Identification
Chapter 5  Experiments and Discussion
  5.1  Experimental Equipment and Setup
  5.2  Experiments
    5.2.1  Finger Counting
    5.2.2  Gesture Recognition Results
Chapter 6  Conclusions and Future Work
References
References  [1] Amma, C., M. Georgi, and T. Schultz, "Airwriting: hands-free mobile text input by spotting and continuous recognition of 3D-space handwriting with inertial sensors," in Proc. 16th Int. Symp. on Wearable Computers, Newcastle, UK, Jun.18-22, 2012, pp.52-59.
[2] Biswas, K. K. and S. K. Basu, "Gesture recognition using Microsoft Kinect," in Proc. 5th Int. Conf. on Automation, Robotics and Applications, Wellington, New Zealand, Dec.6-8, 2011, pp.100-103.
[3] Budiono, Y., The Study of Robot Hands and Its Applications, Master Thesis, Computer Science and Information Engineering Dept., National Central Univ., Jhongli, Taiwan, 2008.
[4] Bullock, I. M., J. Borras, and A. M. Dollar, "Assessing assumptions in kinematic hand models: A review," in Proc. Int. IEEE Conf. on Biomedical Robotics and Biomechatronics, Rome, Italy, Jun.24-27, 2012, pp.139-146.
[5] Cerlinca, T. J. and S. G. Pentiuc, "Robust 3D hand detection for gestures recognition," in Proc. Conf. on Intelligent Distributed Computing, Salamanca, Spain, Apr.6-8, 2011, pp.259-264.
[6] Chen, C.-P., Y.-T. Chen, P.-H. Lee, Y.-P. Tsai, and S. Lei, "Real-time hand tracking on depth images," in Proc. IEEE Conf. on Visual Communications and Image Processing, Tainan, Taiwan, Nov.6-9, 2011, pp.1-4.
[7] Corradini, A., "Real-time gesture recognition by means of hybrid recognizers," in Proc. Int. Workshop on Sign Lang. Human-Computer Interaction, London, UK, Apr.18-20, 2002, pp.34-46.
[8] Dardas, N. H. and N. D. Georganas, "Real-time hand gesture detection and recognition using bag-of-features and support vector machine techniques," IEEE Trans. Instrumentation and Measurement, vol.60, no.11, pp.3592-3607, 2011.
[9] Dipietro, L., A. M. Sabatini, and P. Dario, "A survey of glove-based systems and their applications," IEEE Trans. Systems, Man, and Cybernetics, vol.38, no.4, pp.461-482, 2008.
[10] Du, H. and T. To, Hand Gesture Recognition using Kinect, PhD Diss., Elect. and Computer Eng. Dept., Boston Univ., Boston, MA, 2011.
[11] Elgammal, A., C. Muang, and D. Hu, Skin Detection: a Short Tutorial, Computer Science Dept., Rutgers Univ., Piscataway, NJ, 2009.
[12] Erol, A., G. Bebis, M. Nicolescu, R. D. Boyle, and X. Twombly, "Vision-based hand pose estimation: a review," Computer Vision and Image Understanding, vol.108, no.1–2, pp.52-73, 2007.
[13] Guo, J.-M., Y.-F. Liu, C.-H. Chang, and H.-S. Nguyen, "Improved hand tracking system," IEEE Trans. Circuits and Systems for Video Technology, vol.22, no.5, pp.693-701, 2012.
[14] Kristensson, P. O., T. Nicholson, and A. Quigley, "Continuous recognition of one-handed and two-handed gestures using 3D full-body motion tracking sensors," in Proc. Int. ACM Conf. on Intelligent User Interfaces, Lisbon, Portugal, Feb.14-17, 2012, pp.89-92.
[15] Lee, B. and J. Chun, "Interactive manipulation of augmented objects in marker-less AR using vision-based hand interaction," in Proc. 7th Int. Conf. on Information Technology New Generations, Las Vegas, NV, Apr.12-14, 2010, pp.398-403.
[16] Li, H., L. Yang, X. Wu, S. Xu, and Y. Wang, "Static hand gesture recognition based on HOG with Kinect," in Proc. 4th Int. IEEE Conf. on Intelligent Human-Machine Systems and Cybernetics, Nanchang, Jiangxi, Aug.26-27, 2012, pp.271-273.
[17] Liang, R.-H. and M. Ouhyoung, "A real-time continuous gesture recognition system for sign language," in Proc. Third Int. IEEE Conf. on Automatic Face and Gesture Recognition, Nara, Japan, Apr.14-16, 1998, pp.558-567.
[18] Minnen, D. and Z. Zahoor, "Towards robust cross-user hand tracking and shape recognition," in Proc. Int. IEEE Conf. on Computer Vision Workshops, Barcelona, Spain, Nov.6-13, 2011, pp.1235-1241.
[19] Mitra, S. and T. Acharya, "Gesture recognition: a survey," IEEE Trans. Systems, Man, and Cybernetics, vol.37, no.3, pp.311-323, 2007.
[20] Oikonomidis, I., N. Kyriazis, and A. Argyros, "Efficient model-based 3D tracking of hand articulations using Kinect," in Proc. 22nd British Machine Vision Conf., Guildford, UK, Aug.29-Sep.1, 2011, pp.1-11.
[21] Chen, Q., N. D. Georganas, and E. M. Petriu, "Hand gesture recognition using Haar-like features and a stochastic context-free grammar," IEEE Trans. Instrumentation and Measurement, vol.57, no.8, pp.1562-1571, 2008.
[22] Raheja, J.-L., A. Chaudhary, and K. Singal, "Tracking of fingertips and centers of palm using Kinect," in Proc. 3rd Int. Conf. on Computational Intelligence, Modelling and Simulation, Langkawi, Malaysia, Sep.20-22, 2011, pp.248-252.
[23] Ramirez-Giraldo, D., S. Molina-Giraldo, A. M. Alvarez-Meza, G. Daza-Santacoloma, and G. Castellanos-Dominguez, "Kernel based hand gesture recognition using kinect sensor," in Proc. 17th Symp. on Image, Signal Processing, and Artificial Vision, Antioquia, Colombia, Sep.12-14, 2012, pp.158-161.
[24] Roth, P. M. and M. Winter, Survey of Appearance-based Methods for Object Recognition, Inst. for Computer Graphics and Vision, Graz Univ. of Technology, Graz, Austria, 2008.
[25] Sanli, S. G., E. D. Kizilkanat, N. Boyan, E. T. Ozsahin, M. G. Bozkir, R. Soames, H. Erol, and O. Oguz, "Stature estimation based on hand length and foot length," Clinical Anatomy, vol.18, no.8, pp.589-596, 2005.
[26] Suzuki, S. and K. Abe, "Topological structural analysis of digitized binary images by border following," Computer Vision, Graphics, and Image Processing, vol.30, no.1, pp.32-46, 1985.
[27] Teixeira, J. M., B. Reis, S. Macedo, and J. Kelner, "Open/closed hand classification using Kinect data," in Proc. 14th Symp. on Virtual and Augmented Reality, Rio de Janeiro, Brazil, May 28-31, 2012, pp.18-25.
[28] Trindade, P., J. Lobo, and J.-P. Barreto, "Hand gesture recognition using color and depth images enhanced with hand angular pose data," in Proc. IEEE Conf. on Multisensor Fusion and Integration for Intelligent Systems, Hamburg, Germany, Sep.13-15, 2012, pp.71-76.
[29] Wachs, J. P., M. Kölsch, H. Stern, and Y. Edan, "Vision-based hand-gesture applications," Communications of the ACM, vol.54, no.2, pp.60-71, 2011.
[30] Wise, S., W. Gardner, E. Valainis, Y. Wong, K. Glass, J. Drace, and J.-M. Rosen, "Evaluation of a fiber optic glove for semi-automated goniometric measurements," Journal of Rehabilitation Research and Development, vol.27, no.4, pp.411-424, 1990.
[31] Li, Y., "Multi-scenario gesture recognition using Kinect," in Proc. 17th Int. Conf. on Computer Games, Louisville, KY, Jul.30-Aug.1, 2012, pp.126-130.
[32] Ren, Z., J. Yuan, and Z. Zhang, "Robust hand gesture recognition based on finger-earth mover's distance with a commodity depth camera," in Proc. 19th Int. ACM Conf. on Multimedia, New York, NY, Nov.28-Dec.1, 2011, pp.1093-1096.
Advisor  Din-chang Tseng (曾定章)    Date of Approval  2013-08-26
