Master's/Doctoral Thesis 104327031: Detailed Record




Name: 徐銜鴻 (HSIEN HUNG HSU)    Graduate Institute: Graduate Institute of Opto-Mechatronic Engineering
Thesis Title: Research and Application of Real-Time Recognition of Static Hand Gesture Based on Information of Hand Skeleton and Depth
(基於手部骨架和深度信息的靜態手勢即時辨識研究與應用)
Related Theses:
★ Development of a real-time measurement system for MOCVD wafer surface temperature
★ Development of a real-time measurement system for key MOCVD wafer parameters
★ Enhancing an RDL circuit inspection system with fluorescence microscopy
★ Development of AI-based PCB defect inspection technology
★ Development of a multi-type PCB defect detection model based on YOLO object recognition
★ Full-field phase-type surface plasmon resonance technology
★ Study of a wavelength-modulation heterodyne grating interferometer
★ Image quality evaluation system for camera modules
★ High-speed inspection for laser trimming: application to trimming TFT-LCD shorting bar circuits
★ Study of intensity-differential surface plasmon resonance sensing
★ Study of quasi-common-path heterodyne grating interferometry
★ Study of wavelength-modulation heterodyne speckle interferometry
★ Full-field phase-type surface plasmon resonance biosensor
★ Study of angle and displacement measurement using a pigtailed laser diode optical pickup head
★ Study of a hybrid long-travel precision positioning stage
★ Applications of holographic concentrators with infrared-band light splitting
File: Electronic full text
  1. The electronic full text of this thesis has been approved for immediate open access.
  2. For electronic full texts that have reached their open-access date, users are licensed only to search, read, and print them for personal, non-profit academic research purposes.
  3. Please comply with the Copyright Act of the Republic of China (Taiwan); do not reproduce, distribute, adapt, repost, or broadcast this work without authorization.

Abstract (Chinese): The goal of this study is to develop a real-time hand gesture recognition system. The emerging Intel® RealSense 3D camera technology is combined with a support vector machine (SVM) for data classification to achieve real-time gesture recognition.
Gesture control plays an indispensable role across industries in today's technological development. Smartphones, tablets, personal computers, laptops, televisions, and similar devices may in the future incorporate smart cameras with deep-learning capabilities, and gesture control could change the current human-computer interaction experience. This study therefore investigates gesture recognition with the Intel® RealSense 3D camera.
The Intel® RealSense 3D camera is used to capture 22 hand joint points, each carrying world three-dimensional coordinates with depth. LIBSVM, developed by Professor Chih-Jen Lin of National Taiwan University, is used to train the captured hand joint points into a gesture model; hand joint points subsequently captured by the Intel® RealSense 3D camera are then fed into this LIBSVM-generated gesture model to obtain the classification result.
Using the human-machine interface program developed in this study, 10 numeric gestures are recognized with an average recognition rate of 99.5%, and 11 English-letter gestures are applied to gesture typing to verify the correctness of this study.
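The record contains no code, but the pipeline described above starts from 22 hand-joint world coordinates per frame. As an illustrative sketch only (the joint ordering, units, and file name below are assumptions, not taken from the thesis), the 22 (x, y, z) points could be flattened into a 66-dimensional feature vector in LIBSVM's text format like this:

    # Hypothetical sketch: flatten 22 hand-joint world coordinates (x, y, z)
    # into one LIBSVM-format line "label index:value index:value ...".
    # Joint ordering, units, and the output file name are assumptions.

    def joints_to_libsvm_line(label, joints):
        """joints: a list of 22 (x, y, z) tuples for one captured frame."""
        if len(joints) != 22:
            raise ValueError("expected 22 hand joints")
        features = [coord for joint in joints for coord in joint]  # 66 values
        body = " ".join(f"{i + 1}:{v:.6f}" for i, v in enumerate(features))
        return f"{label} {body}"

    # Append one training sample for gesture class 5 (fake coordinates).
    sample = [(0.01 * i, 0.02 * i, 0.40) for i in range(22)]
    with open("gesture_train.txt", "a") as f:
        f.write(joints_to_libsvm_line(5, sample) + "\n")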
Abstract (English): Gesture control technology is becoming more advanced. In the future, smartphones, tablets, personal computers, laptops, TVs, and similar devices may utilize smart cameras with deep learning capabilities to recognize gestures. Gesture control changes the human-computer interaction experience, so this study explores gesture recognition using the Intel® RealSense 3D camera, with the goal of developing a real-time static hand gesture recognition system.
Gesture data are classified by combining Intel® RealSense 3D camera technology with a support vector machine (SVM). The camera captures 22 hand joints, each carrying world three-dimensional coordinates. Using LIBSVM, developed by Professor Chih-Jen Lin of National Taiwan University, the extracted hand joints are trained to generate a gesture model. Hand joints subsequently captured by the Intel® RealSense 3D camera are then evaluated against the LIBSVM-generated gesture model to obtain a classification result.
With the human-machine interface program developed in this study, an average of 99.5% of the numeric gestures were correctly recognized, and 11 letter gestures were recognized to carry out gesture typing.
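Both abstracts describe the same two-stage flow: train a gesture model from labeled joint features with LIBSVM, then classify newly captured joints against that model. The following is a minimal sketch of that flow using the official LIBSVM Python interface; the thesis itself integrates LIBSVM into its own human-machine interface program, and the file names and kernel parameters below are placeholders, not values reported in the thesis.

    # Minimal sketch of the LIBSVM train-then-predict flow described above.
    # "gesture_train.txt", "gesture_live.txt", and "-t 2 -c 4 -g 0.5" are
    # placeholder names and parameters, not values taken from the thesis.
    from libsvm.svmutil import (svm_read_problem, svm_train, svm_predict,
                                svm_save_model, svm_load_model)

    # Train a gesture model from labeled 66-dimensional joint features.
    y_train, x_train = svm_read_problem("gesture_train.txt")
    model = svm_train(y_train, x_train, "-t 2 -c 4 -g 0.5")  # RBF kernel
    svm_save_model("gesture.model", model)

    # Classify newly captured frames against the saved gesture model.
    model = svm_load_model("gesture.model")
    y_live, x_live = svm_read_problem("gesture_live.txt")
    labels, accuracy, _ = svm_predict(y_live, x_live, model)
    print("Predicted gesture classes:", labels)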
Keywords (Chinese): ★ Intel® RealSense 3D camera
★ depth point data
★ support vector machine
★ LIBSVM
★ hand gesture recognition
Keywords (English): ★ Intel® RealSense 3D Camera
★ Depth Data
★ Support Vector Machine
★ LIBSVM
★ Gesture Recognition
Table of Contents
Abstract (Chinese) I
Abstract (English) II
Acknowledgements III
Table of Contents IV
List of Figures VI
List of Tables VIII
Chapter 1 Introduction 1
1.1 Research Background 1
1.2 Literature Review of Hand Gesture Recognition 2
1.2.1 Vision-Based Approaches to Hand Gesture Recognition 2
1.2.2 Discussion of Hand Gesture Recognition Categories 6
1.2.3 Depth-Information-Based Static Hand Gesture Recognition Systems 7
1.3 Research Objectives 11
1.4 Thesis Organization 12
Chapter 2 Fundamental Theory 13
2.1 Intel® RealSense 3D Camera 13
2.1.1 Intel® RealSense 3D Camera Coordinate System 16
2.2 Intel® RealSense Software Development Kit Hand Tracking Module 17
2.2.1 Tracking Modes of the Hand Tracking Module 18
2.2.2 Acquiring Hand Joint Data 20
2.3 Gesture Feature Extraction 22
2.4 Principles of the Support Vector Machine 25
2.4.1 Linearly Separable SVM 25
2.4.2 Non-Linearly Separable SVM 26
2.5 LIBSVM 27
2.6 Summary 27
Chapter 3 System Architecture 28
3.1 Tracking Hand Joints with the IRS 3D Camera and IRS SDK 28
3.1.1 Creating the SenseManager Interface 28
3.1.2 Acquiring Hand Joint Data with PXCMHandData 30
3.2 Training Gesture Features and Building the Gesture Model with LIBSVM 32
3.2.1 Training Gesture Features with LIBSVM 32
3.2.2 Building the Gesture Model File with LIBSVM 34
3.3 Gesture Recognition and Classification with LIBSVM 38
3.3.1 Integrating the Gesture Model into the Human-Machine Interface Program with the LIBSVM Toolkit 38
3.3.2 Gesture Recognition and Classification Using LIBSVM 39
3.4 Human-Machine Interface Program and Execution Flow 41
3.4.1 Introduction to the Human-Machine Interface 42
3.4.2 Execution Flow of the Human-Machine Interface Program 45
Chapter 4 Experimental Results and Discussion 46
4.1 Experimental Equipment and System Environment 46
4.2 Numeric Gesture Recognition Experiment 47
4.2.1 Numeric Gesture Recognition Experimental Procedure 48
4.2.2 Standard Numeric Gesture Recognition Results 49
4.3 Gesture Recognition Experiment on the Relationship Between Finger Bending and Recognition Rate 54
4.3.1 Finger Bending and Recognition Rate Experimental Procedure 58
4.3.2 Finger Bending and Recognition Rate Results 59
4.4 English Letter Gestures 60
4.5 Typing Applications of Gesture Recognition 65
4.5.1 Typing with Numeric Gestures 65
4.5.2 Typing with English Letter Gestures 69
Chapter 5 Conclusions and Future Work 75
5.1 Conclusions 75
5.2 Future Work 75
References 77
References
[1] P. N. Belhumeur, J. Hespanha, and D. J. Kriegman, "Eigenfaces vs. Fisherfaces: Recognition using class specific linear projection," IEEE Transactions on Pattern Analysis and Machine Intelligence, vol. 19, no. 7, pp. 711-720, 1997.
[2] J. Davis and M. Shah, "Visual gesture recognition," IEE Proceedings - Vision, Image and Signal Processing, vol. 141, pp. 101-106, 1994.
[3] W. Du and H. Li, "Vision based gesture recognition system with single camera," Proc. of ICSP2000, vol. 2, pp. 1351-1357, 2000.
[4] 沈全發, "Integration of a mechanical glove and virtual reality," Master's thesis, National Cheng Kung University, 2002.
[5] 葉政憲, "Design and application of a hand information acquisition system," Master's thesis, National Cheng Kung University, 2005.
[6] C. L. Huang and S. H. Jeng, "A model-based hand gesture recognition system," Machine Vision and Applications, vol. 12, no. 5, pp. 243-258, 2001.
[7] T. S. Huang, Y. Wu, and J. Lin, "3D model-based visual hand tracking," Proc. of the 2002 IEEE Int. Conf. on Multimedia and Expo, vol. 1, pp. 905-908, 2002.
[8] Y. Yasumuro, Q. Chen, and K. Chihara, "3D modeling of human hand with motion constraints," Proc. of the Int. Conf. on 3-D Digital Imaging and Modeling, pp. 275-282, 1997.
[9] J. Lee and T. L. Kunii, "Model-based analysis of hand posture," IEEE Computer Graphics and Applications, vol. 1, no. 5, pp. 77-86, 1995.
[10] C. C. Lien and C. L. Huang, "Model-based articulated hand motion tracking for gesture recognition," Image and Vision Computing, vol. 16, no. 2, pp. 121-134, 1998.
[11] C. C. Lien and C. L. Huang, "The model based dynamic hand posture identification using genetic algorithm," Machine Vision and Applications, vol. 11, pp. 107-121, 1999.
[12] Y. Wu and T. S. Huang, "Capturing articulated human hand motion: A divide-and-conquer approach," Proc. of IEEE Int. Conf. on Computer Vision, pp. 606-611, 1999.
[13] F. Lathuiliere and J. Y. Herve, "Visual tracking of hand posture with occlusion handling," Proc. of the 15th Int. Conf. on Pattern Recognition, vol. 3, pp. 1129-1133, 2000.
[14] S. Y. Ho, Z. B. Huang, and S. J. Ho, "An evolutionary approach for pose determination and interpretation of occluded articulated objects," Proc. of the 2002 Congress on Evolutionary Computation, vol. 2, pp. 1092-1097, 2002.
[15] J. M. Rehg and T. Kanade, "Model-based tracking of self-occluding articulated objects," Proc. of the 5th Int. Conf. on Computer Vision, pp. 612-617, 1995.
[16] T. Heap and D. Hogg, "Towards 3D hand tracking using a deformable model," Proc. of the 2nd Int. Conf. on Automatic Face and Gesture Recognition, pp. 140-145, 1996.
[17] 陳治宇, "Virtual mouse: vision-based hand gesture recognition," Master's thesis, National Sun Yat-sen University, 2011.
[18] 莊義宗, "Applying hidden Markov models to static hand gesture recognition," Master's thesis, Tatung University, 2013.
[19] L. R. Rabiner, "A tutorial on hidden Markov models and selected applications in speech recognition," Proceedings of the IEEE, vol. 77, no. 2, pp. 257-286, 1989.
[20] 王建中, "Discriminative hidden Markov models for face recognition," Master's thesis, National Cheng Kung University, 2005.
[21] 江明晏, "Coupled hidden Markov models for two-hand gesture recognition," Master's thesis, National Cheng Kung University, 2007.
[22] A. Corradini, "A Real-time gesture recognition by means of hybrid recognizers," Human-Computer Interaction, pp. 34-46, 2002.
[23] R. H. Liang and M. Ouhyoung, "A real-time continuous gesture recognition system for sign language," IEEE Conf. on Automatic Face and Gesture Recognition, pp. 558-567, 1998.
[24] Q. Chen, N. D. Georganas, and E. M. Petriu, "Hand gesture recognition using haar-like features and a stochastic context-free grammar," IEEE Transactions on Instrumentation and Measurement, vol. 57, no. 8, pp. 1562-1571, 2008.
[25] 李健銘, "Real-time sign language recognition," Master's thesis, National Cheng Kung University, 2010.
[26] 劉書銘, "Relaxed static hand gesture recognition based on depth information," Master's thesis, National Central University, 2013.
[27] K. K. Biswas and S. K. Basu, "Gesture recognition using Microsoft Kinect," Proc. of the 5th Int. Conf. on Automation, Robotics and Applications, pp. 100-103, 2011.
[28] "Human Computer Interaction : KINECT Sensor," http://www.cs.nccu.edu.tw/~whliao/hci2012/moe-hci-B41-kinect.pptx.
[29] Intel, "Intel® RealSense™ SDK 2016 R3 Documentation," https://software.intel.com/sites/landingpage/realsense/camera-sdk/v2016r3/documentation/html/index.html?doc_devguide_introduction.html.
[30] 李俊明, "Development of techniques for converting dual-image multi-view structured light into 3D point data," Master's thesis, National Central University, 2016.
[31] R. Mangera, "Static gesture recognition using features extracted," Council of Scientific and Industrial Research, 2013.
[32] C. Wang, Z. Lin, and S. C. Chan, "Superpixel-Based hand gesture recognition with Kinect depth camera," IEEE Transactions on Multimedia, vol. 17, no. 1, pp. 29-39, 2015.
[33] C. F. Wu, J. Xie, and L. Yu, "Research and application of gesture recognition based on information of body skeleton and depth," Computer Technology and Development, vol. 26, no. 8, pp. 200-204, 2016.
[34] H. B. Lee, Y. Yu, and Y. Chen, "Static gesture recognition method via using RealSense depth information," http://www.paper.edu.cn/releasepaper/content/201703-285, 2017.
[35] N. H. Dardas and N. D. Georganas, "Real-time hand gesture detection and recognition using bag-of-features and support vector machine techniques," IEEE Transactions on Instrumentation and Measurement, vol. 60, no. 11, pp. 3592-3607, 2011.
[36] C. Cortes and V. Vapnik, "Support-Vector Networks," Machine Learning, vol. 20, pp. 273-297, 1995.
[37] "支援向量機," https://zh.wikipedia.org/wiki/%E6%94%AF%E6%8C%81%E5%90%91%E9%87%8F%E6%9C%BA.
[38] 林宗勳, "Support Vector Machines 簡介," http://www.cmlab.csie.ntu.edu.tw/~cyy/learning/tutorials/SVM2.pdf.
[39] C. W. Hsu, C. C. Chang, and C. J. Lin, "A practical guide to Support Vector Classification," https://www.csie.ntu.edu.tw/~cjlin/papers/guide/guide.pdf, 2016.
[40] C. C. Chang and C. J. Lin, "LIBSVM: A Library for Support Vector Machines," National Taiwan University, 2013.
[41] "美國手語字母," https://zh.wikipedia.org/wiki/%E7%BE%8E%E5%9C%8B%E6%89%8B%E8%AA%9E%E5%AD%97%E6%AF%8D.

Advisor: 李朱育 (JU YI LEE)    Date of Approval: 2018-01-29
