Master's/Doctoral Thesis 102522110: Detailed Record




Author: Kuan-zhong Lo (羅冠中)   Department: Department of Computer Science and Information Engineering
Thesis Title: A Real-Time Hand Gesture Recognition System and its Application in a Battlefield Information Platform
(即時手勢辨識系統及其於戰場情資顯示平台之應用)
Related Theses
★ A Q-Learning-Based Swarm Intelligence Algorithm and Its Applications
★ Development of a Rehabilitation System for Children with Developmental Delays
★ Comparing Teacher Assessment and Peer Assessment from the Perspective of Cognitive Styles: From English Writing to Game Making
★ Design of a Fuzzy Neural Network-Based Remote Sensing Image Classifier
★ A Hybrid Clustering Algorithm
★ Development of Assistive Devices for People with Disabilities
★ A Study on Fingerprint Classifiers
★ A Study on Backlit Image Compensation and Color Quantization
★ Application of Neural Networks to Business Income Tax Audit Case Selection
★ A New Online Learning System and Its Application to Tax Audit Case Selection
★ An Eye-Tracking System and Its Applications to Human-Machine Interfaces
★ Data Visualization Combining Swarm Intelligence and Self-Organizing Maps
★ Development of a Pupil-Tracking System for Human-Machine Interfaces for the Disabled
★ An Artificial-Immune-System-Based Online Learning Neuro-Fuzzy System and Its Applications
★ Application of Genetic Algorithms to Speech Descrambling
★ Research and Implementation of an Iris Recognition System
  1. This electronic thesis is approved for immediate open access.
  2. The open-access full text is licensed to users only for personal, non-commercial retrieval, reading, and printing for the purpose of academic research.
  3. Please observe the relevant provisions of the Copyright Act of the Republic of China; do not reproduce, distribute, adapt, repost, or broadcast it without authorization.

Abstract (Chinese) In recent years, hand gesture recognition has drawn experts from many fields into research on human-computer interaction; common applications include game control, robotic-arm operation, robot control, and home-appliance control. Its simple, intuitive style of operation is gradually replacing traditional remote controls and input devices.
This thesis proposes a real-time hand gesture recognition system based on depth images and applies it to controlling a battlefield information platform built on NASA World Wind. Besides displaying basic world-map information, the platform we developed provides extensive geographical and environmental information that meets the needs of front-line military units.
The system is implemented as follows. The depth images and skeleton data read from a depth camera first undergo the necessary preprocessing to retain the arm region, and a palm-segmentation algorithm then extracts the palm. The curve of distances from each point on the palm contour to the palm centroid is used as the feature describing the hand shape. Because this feature is affected by the palm's rotation angle and size, it is not suitable for recognition directly; the thesis therefore samples the distance curve and applies the Fast Fourier Transform to obtain a set of coefficients in the frequency domain. Since different hand shapes produce different coefficient values, a decision tree over these coefficients is used to recognize six hand shapes. These hand shapes are then combined with up, down, left, and right swipes of both hands to form the set of six commands needed to operate the battlefield information platform.
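The feature-extraction step in the abstract (contour-to-centroid distance curve, resampled and passed through an FFT) can be sketched as follows. This is a minimal NumPy illustration rather than the thesis's actual implementation; the sample count and number of coefficients are assumed values.

```python
import numpy as np

def hand_shape_feature(contour, num_samples=64, num_coeffs=8):
    """Rotation- and scale-tolerant descriptor of a palm contour.

    contour: (N, 2) array of (x, y) points along the palm boundary.
    Returns the magnitudes of the first `num_coeffs` FFT coefficients
    of the centroid-distance curve, normalized by the DC term.
    """
    contour = np.asarray(contour, dtype=float)
    centroid = contour.mean(axis=0)
    # Distance from every contour point to the palm centroid.
    distances = np.linalg.norm(contour - centroid, axis=1)
    # Resample the curve to a fixed length so the FFT size is constant
    # regardless of how many boundary points were extracted.
    idx = np.linspace(0, len(distances) - 1, num_samples)
    curve = np.interp(idx, np.arange(len(distances)), distances)
    mags = np.abs(np.fft.fft(curve))
    # Magnitudes discard phase (which shifts when the hand rotates);
    # dividing by the DC term removes the dependence on hand size.
    return mags[1:num_coeffs + 1] / mags[0]
```

A fist-like, nearly circular contour yields a flat distance curve and near-zero coefficients, while spread fingers produce strong low-frequency lobes; this contrast is what makes the coefficients usable as hand-shape features.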
The goal of this work is to give military personnel a next-generation battlefield environment platform: in addition to traditional keyboard-and-mouse operation, it adopts the latest human-machine interface technology, replacing traditional input devices with gesture control. Every gesture-control function of the system was verified through a series of experiments. In the hand-shape recognition experiment the accuracy reached 96.1%, and in the combined gesture detection and recognition experiment the overall average accuracy reached 97.9% even under different viewing angles.
Abstract (English) For the past few years, gesture recognition research in human-computer interaction has attracted experts' attention in various fields; common applications include game control, robotic-arm operation, robot control, household-appliance control, and so on. Owing to its convenient and intuitive manipulation, hand-gesture-based control will gradually replace traditional remote controls and input devices.
This thesis presents a real-time hand gesture recognition system based on depth images and applies it to a battlefield information platform built on NASA World Wind. The platform not only displays world-map information but also offers extensive geographical and environmental information that satisfies military demands.
The proposed hand gesture recognition system is implemented as follows. First, we locate the arm region in an image captured by the Kinect through several necessary depth-image preprocessing and skeleton-tracking operations. Second, a hand-capture algorithm extracts the hand shape from the arm region. We then use the curve of distances between the hand boundary and the hand center as a feature describing the hand shape. However, this feature cannot be used for recognition directly, since it is still affected by the hand's rotation angle and size. We therefore transform the sampled distance curve into frequency-domain coefficients with the Fast Fourier Transform. Because different hand shapes yield different Fourier coefficients, a decision tree over these coefficients is adopted to recognize six hand shapes. Finally, combinations of these hand shapes performed with both hands serve as the commands that control the battlefield information platform.
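The decision-tree step can be illustrated with a toy classifier over such normalized coefficient magnitudes. The thresholds, coefficient indices, and class labels below are hypothetical placeholders; the thesis derives its own tree for six hand shapes from training data.

```python
def classify_hand_shape(coeffs):
    """Toy decision tree over normalized FFT-coefficient magnitudes.

    coeffs[k] is the magnitude of frequency bin k+1, normalized by the
    DC term. All thresholds below are illustrative, not the thesis's.
    """
    c1, c5 = coeffs[0], coeffs[4]
    if max(coeffs[:5]) < 0.03:
        # Flat spectrum: the contour is nearly circular (closed fist).
        return "fist"
    if c5 >= c1:
        # A dominant five-lobe component suggests five spread fingers.
        return "open_palm"
    if c1 > 0.1:
        # One dominant lobe: a single extended finger.
        return "point"
    return "unknown"
```

In practice each internal node compares one coefficient against a threshold learned from labeled samples; the recognized shapes are then paired with two-handed swipe directions to form the platform's command set.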
The aim of this system is to provide a brand-new battlefield platform for the military: in addition to operation with a traditional keyboard and mouse, it introduces the latest human-computer interaction technology. Finally, several experiments were designed to evaluate the functions of the proposed real-time hand gesture recognition system. In the hand-shape experiments the recognition rate is 97.1%, and even at different angles the recognition rate in the two-handed command experiments reaches 97.42%.
Keywords (Chinese) ★ Depth camera
★ Human-computer interaction
★ Hand gesture recognition
Keywords (English)
Table of Contents Abstract (Chinese) i
Abstract (English) iii
Acknowledgments v
Table of Contents vi
List of Figures viii
List of Tables x
Chapter 1. Introduction 1
1-1 Research Motivation 1
1-2 Research Objectives 3
1-3 Thesis Organization 4
Chapter 2. Related Work 5
2-1 Hand Gesture Recognition 5
2-1-1 Sensor-Based Gesture Recognition 6
2-1-2 Vision-Based Gesture Recognition 7
2-1-3 Gesture-Related Applications and Products 11
2-2 Kinect for Windows 14
2-2-1 Hardware Specifications 14
2-2-2 Depth Detection Range and Accuracy 17
2-2-3 Kinect Applications 19
2-3 Introduction to NASA World Wind 20
Chapter 3. Methods and Procedures 22
3-1 Hand Shape Recognition 24
3-1-1 Skeleton Detection and Depth Detection 25
3-1-2 Edge Detection 27
3-1-3 Contour Description 28
3-1-4 Hand Shape Feature Extraction 30
3-2 Gesture Detection 33
Chapter 4. Battlefield Environment Display Platform 36
4-1 System Architecture 36
4-2 System Overview 40
4-2-1 General Operation Mode 42
4-2-2 Adding Custom Layers 43
4-2-3 Adding Custom Events 45
4-2-4 Adding Ships and Radar Communication Ranges 47
4-2-5 Exporting Documents 48
Chapter 5. Experimental Results 49
5-1 Hand Shape Recognition Experiments 49
5-1-1 Hand Shape Recognition Accuracy 49
5-1-2 Hand Shape Confusion 50
5-1-3 Recognition Rate under Hand Rotation 53
5-2 Gesture Detection Experiments 54
5-2-1 Accuracy of World Wind Gesture Commands 54
5-2-2 Runtime Comparison 55
5-3 Questionnaire Results 56
Chapter 6. Conclusions and Future Work 58
6-1 Conclusions 58
6-2 Future Work 59
References 60
Appendix 1 65
Appendix 2 67
Appendix 3 69
References [1] W. Freeman and M. Roth, “Orientation histograms for hand gesture recognition,” in Proc. International Workshop on Automatic Face and Gesture Recognition, pp. 296–301, 1995.
[2] VRLogic Co. [Online]. Available: http://www.vrlogic.com/html/5dt/5dt_dataglove_5.html. [Accessed: 06-Jun-2015].
[3] Measurand Inc. [Online]. Available: http://www.shapehand.com/shapehand.html. [Accessed: 03-Jul-2015].
[4] J. M. Rehg and T. Kanade, “DigitEyes: vision-based hand tracking for human-computer interaction,” in Proc. of the Workshop on Motion of Non-Rigid and Articulated Bodies, pp. 16-22, 1994.
[5] E. Ueda, Y. Matsumoto, M. Imai, and T. Ogasawara, “Hand pose estimation for vision-based human interface,” in Proc. IEEE International Workshop on Robot and Human Interactive Communication, pp. 473–478, 2001.
[6] A. Causo, M. Matsuo, E. Ueda, K. Takemura, Y. Matsumoto, J. Takamatsu, and T. Ogasawara, “Hand pose estimation using voxel-based individualized hand model,” in IEEE/ASME International Conference on Advanced Intelligent Mechatronics, pp. 451-456, 2009.
[7] R. Yang and S. Sarkar, “Gesture Recognition Using Hidden Markov Models from Fragmented Observations,” in Proc. IEEE Conference on Computer Vision and Pattern Recognition, 2006.
[8] M. Elmezain, A. Al-Hamadi, G. Krell, and S. El-Etriby, “Gesture Recognition for Alphabets from Hand Motion Trajectory Using Hidden Markov Models,” in IEEE International Symposium on Signal Processing and Information Technology, pp. 1192–1197, 2007.
[9] M. A. Amin and Y. Hong, “Sign language finger alphabet recognition from gabor-PCA representation of hand gestures,” in Proceedings of the Sixth International Conference on Machine Learning and Cybernetics, vol. 4, pp. 2218–2223, 2007.
[10] Y. Fang, J. Cheng, K. Wang, and H. Lu, “Hand Gesture Recognition Using Fast Multi-scale Analysis,” in Proc. of IEEE International Conference on Image and Graphics, 2007.
[11] M. Vafadar and A. Behrad, “Human hand gesture recognition using motion orientation histogram for interaction of handicapped persons with computer,” in Lecture Notes in Computer Science, vol. 5099, pp. 378–385, 2008.
[12] A. F. Bobick and A. D. Wilson, “A state-based approach to the representation and recognition of gesture,” IEEE Trans. Pattern Anal. Mach. Intell., vol. 19, no. 12, pp. 1325–1337, 1997.
[13] J. F. Lichtenauer, E. A. Hendriks, and M. J. T. Reinders, “Sign language recognition by combining statistical DTW and independent classification,” IEEE Trans. Pattern Anal. Mach. Intell., vol. 30, no. 11, pp. 2040–2046, 2008.
[14] M. V. Lamar, M. S. Bhuiyan, and A. Iwata, “T-CombNET - A Neural Network Dedicated to Hand Gesture Recognition,” Lecture Notes In Computer Science, vol. 1811, pp. 613-622, 2000.
[15] E. Stergiopoulou, N. Papamarkos, and A. Atsalakis, “Hand Gesture Recognition Via a New Self-organized Neural Network,” Prog. Pattern Recognition, vol. 3773, pp. 891–904, 2005.
[16] P. Hong, M. Turk, and T. Huang, “Gesture modeling and recognition using finite state machines,” in Proc. Fourth IEEE International Conference on Automatic Face and Gesture Recognition, pp. 410–415, 2000.
[17] M. A. Amin and H. Yan, “Sign Language Finger Alphabet Recognition From Gabor-PCA Representation of Hand Gestures,” in Proc. of the Sixth International Conference on Machine Learning and Cybernetics, pp. 2218-2223, 2007.
[18] 劉東樺, “A Real-Time Hand Gesture Recognition System Based on Adaptive Skin Color Detection and Motion History Images,” Master's thesis, Department of Computer Science and Engineering, Tatung University, 2009.
[19] P. Premaratne and Q. Nguyen, “Consumer electronics control system based on hand gesture moment invariants,” Computer Vision, Institution of Engineering and Technology, vol.1, no.1, pp. 35-41, Mar. 2007.
[20] Myo. [Online]. Available: https://www.thalmic.com/myo/. [Accessed: 10-Jun-2015].
[21] Engadget. [Online]. Available: http://chinese.engadget.com/2008/06/14/toshiba-qosmio-g55-features-spursengine-visual-gesture-controls/. [Accessed: 16-Jun-2015].
[22] Sotouch Co. [Online]. Available: http://www.so-touch.com/?id=software&content=air-presenter#/software/air-presenter. [Accessed: 20-Jun-2015].
[23] Xbox. [Online]. Available: http://www.xbox.com/en-US/kinect. [Accessed: 16-Jun-2015].
[24] Evoluce AG. [Online]. Available: http://www.win-ni.com/. [Accessed: 18-Jun-2015].
[25] Microsoft Kinect for Windows. [Online]. Available: http://www.microsoft.com/en-us/kinectforwindows/. [Accessed: 29-May-2015].
[26] Microsoft Xbox. [Online]. Available: http://www.waybeta.com/news/58230/microsoft-kinect-somatosensory-gamedevice-full-disassembly-report-_microsoft-xbox. [Accessed: 29-May-2015].
[27] Microsoft Developer Network. [Online]. Available: https://msdn.microsoft.com/en-us/library/hh438998.aspx. [Accessed: 02-May-2015].
[28] Kinect sensor. [Online]. Available: http://msdn.microsoft.com/zh-tw/hh367958.aspx. [Accessed: 15-May-2015].
[29] Kinect depth-point density histogram. [Online]. Available: http://kheresy.files.wordpress.com/2011/12/depthhistogram.png?w=630. [Accessed: 02-May-2015].
[30] G. Fanelli, J. Gall, and L. Van Gool, “Real Time Head Pose Estimation with Random Regression Forests,” in IEEE Conference on Computer Vision and Pattern Recognition, 2011.
[31] NASA World Wind. [Online]. Available: http://worldwind.arc.nasa.gov/features.html. [Accessed: 02-May-2015]
[32] 鄧己正, “Vision-Based Face Recognition,” Master's thesis, Department of Computer Science and Information Engineering, National Central University, 2001.
[33] M. Soriano and B. Martinkauppi, “Using the skin locus to cope with changing illumination conditions in color-based face tracking,” in Proc. IEEE Nordic Signal Processing Symposium, pp. 383-386, 2000.
[34] 林文章, “Skin Color Detection and Face Localization in Different Scenes,” Master's thesis, Department of Electrical Engineering, National Central University, 2009.
[35] M. Hu, S. Worrall, A. H. Sadka, and A. A. Kondoz, “Face feature detection and model design for 2D scalable model-based video coding,” in Proc. International Conference on Visual Information Engineering (VIE 2003): Ideas, Applications, Experience, pp. 125–128, 2003.
[36] 蘇芳生, “A Facial Expression Recognition System,” Master's thesis, Department of Communications Engineering, National Chung Cheng University, 2004.
[37] 曾郁展, “A DSP-Based Real-Time Face Recognition System,” Master's thesis, Department of Electrical Engineering, National Sun Yat-sen University, 2005.
[38] 蔡嵩陽, “A Real-Time Hand Shape Recognition System and Its Application to Home Appliance Control,” Master's thesis, Department of Computer Science and Information Engineering, National Central University, 2011.
[39] J. Sauro, “A Practical Guide to the System Usability Scale: Background, Benchmarks & Best Practices,” 2014.
Advisor: Mu-Chun Su (蘇木春)   Date of Approval: 2015-07-28

For questions about this thesis, please contact the Promotion Services Division, National Central University Library, TEL: (03)422-7151 ext. 57407, or by e-mail.