Master's/Doctoral Thesis 965202071: Detailed Record




Name: Shin-zung Huang (黃信榮)   Department: Computer Science and Information Engineering
Thesis Title: Human-Machine Interaction Using Stereo Vision-based Gesture Recognition
(基於立體視覺手勢辨識的人機互動系統)
Related Theses
★ An Intelligent Controller Development Platform Integrating a GRAFCET Virtual Machine
★ Design and Implementation of a Distributed Industrial Electronic Kanban Network System
★ Design and Implementation of a Dual-Touch Screen Based on a Two-Camera Vision System
★ An Embedded Computing Platform for Intelligent Robots
★ An Embedded System for Real-Time Moving Object Detection and Tracking
★ A Multiprocessor Architecture and Distributed Control Algorithm for Solid-State Drives
★ Robot System-on-Chip Design Integrating Bio-Inspired Intelligent Behavior Control
★ Design and Implementation of an Embedded Wireless Image Sensor Network
★ A License Plate Recognition System Based on a Dual-Core Processor
★ Continuous 3D Gesture Recognition Based on Stereo Vision
★ Design and Hardware Implementation of a Miniature, Ultra-Low-Power Wireless Sensor Network Controller
★ Real-Time Face Detection, Tracking, and Recognition in Streaming Video: an Embedded System Design
★ Embedded Hardware Design of a Fast Stereo Vision System
★ Design and Implementation of a Real-Time Image Stitching System
★ An Embedded Gait Recognition System Based on a Dual-Core Platform
★ UDP/IP Hardware Accelerator Design for Gigabit Ethernet
  1. The electronic full text of this thesis is authorized for immediate open access.
  2. The open-access electronic full text is licensed only for personal, non-profit retrieval, reading, and printing for academic research purposes.
  3. Please comply with the Copyright Act of the Republic of China; do not reproduce, distribute, adapt, repost, or broadcast the work without authorization.

Abstract (Chinese): Consumer electronic products are moving toward more convenient, human-centered interaction interfaces. Interacting with machines through body posture, without direct touch, has become an important research trend. This thesis presents a human-machine interaction system based on stereo vision gesture recognition. Two cameras are mounted above the computer screen, aimed at the space in front of it. Assuming that the hand controlling the computer is the object nearest the screen, a neural network is used to learn the mapping between disparity and image depth. Finger detection is then performed on the nearest object found, and static gestures are defined by the number and positions of the fingers. Finally, a dynamic gesture model is built from state transitions between static gestures and the actions executed in each state; different dynamic gestures in the system represent different interaction commands. Experimental results show that the stereo vision method locates the hand effectively and accurately, without spending excessive computation on extracting the hand from a complex background. Our dynamic gesture model achieves a recognition rate comparable to other approaches but with lower computational complexity; moreover, adding a new gesture to the system only requires adding a new state-transition description, with no retraining of the dynamic gesture model. Our method therefore provides a highly flexible, efficient, and reliable approach for the ever-growing range of human-machine interaction applications.
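The abstract above describes learning the disparity-to-depth mapping with a neural network. For reference, the closed-form pinhole stereo relation that such a mapping approximates is Z = f * B / d; the sketch below illustrates it with hypothetical camera parameters (the focal length and baseline values are placeholders, not taken from the thesis):

```python
# Pinhole stereo model: depth is inversely proportional to disparity.
# The thesis learns this mapping with a neural network; here we show the
# closed-form relation it approximates. Parameter values are hypothetical.

def depth_from_disparity(disparity_px, focal_px=700.0, baseline_mm=60.0):
    """Depth (mm) from disparity (px): nearer objects have larger disparity."""
    if disparity_px <= 0:
        raise ValueError("disparity must be positive")
    return focal_px * baseline_mm / disparity_px

# The controlling hand is assumed to be the nearest object, i.e. the
# image region with the largest disparity between the two cameras.
disparities = [10.0, 25.0, 40.0]
nearest_depth = min(depth_from_disparity(d) for d in disparities)
```

Because depth falls as disparity grows, picking the region of maximum disparity is equivalent to picking the minimum-depth (nearest) object, which is the hand-segmentation assumption the system relies on.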
Abstract (English): At present, consumer electronic products are developing toward more convenient and user-friendly interactive interfaces. Interacting with machines through body posture, without direct touch, has become an important research trend. This thesis presents a stereo vision-based gesture recognition system for human-computer interaction. Two cameras are mounted above the computer screen, facing the space in front of it. Assuming that the hand controlling the computer is the object nearest the screen, we use a neural network to learn the mapping between disparity and image depth. Once the nearest object is found, we detect the fingers, and static gestures are defined by the number and positions of the fingers. Finally, a dynamic gesture model is built from the state transitions between static gestures and the actions executed in each state; different dynamic gestures indicate different human-computer interaction commands. Experiments show that stereo vision locates the hand efficiently and accurately, without spending excessive computation on extracting the hand from a complex background. Our dynamic gesture recognition achieves accuracy comparable to other approaches with lower computational complexity, and adding a new gesture to the system only requires a new state-transition description, with no retraining of the dynamic gesture model. The system presented here therefore provides a highly flexible, efficient, and reliable human-computer interaction method.
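The dynamic gesture model described in the abstract (static gestures as states, dynamic gestures as transition sequences that emit commands) can be illustrated as a small finite-state machine. This is a minimal sketch: the gesture, state, and command names below are hypothetical examples, not the thesis's actual vocabulary:

```python
# Dynamic gestures as a finite-state machine: recognized static gestures
# (e.g. identified by finger count) drive transitions, and a completed
# transition path emits a command. Adding a new dynamic gesture only adds
# transition entries -- no retraining, matching the flexibility claimed
# in the abstract. All names here are hypothetical illustrations.

class GestureFSM:
    def __init__(self, start="idle"):
        self.start = start
        self.state = start
        # (state, static_gesture) -> (next_state, command or None)
        self.transitions = {}

    def add_transition(self, state, gesture, next_state, command=None):
        self.transitions[(state, gesture)] = (next_state, command)

    def observe(self, gesture):
        """Consume one recognized static gesture; return a command or None."""
        nxt = self.transitions.get((self.state, gesture))
        if nxt is None:
            self.state = self.start  # unrecognized sequence: reset
            return None
        self.state, command = nxt
        return command

fsm = GestureFSM()
# Hypothetical dynamic gesture: open palm, then fist -> "grab" command
fsm.add_transition("idle", "five_fingers", "palm_open")
fsm.add_transition("palm_open", "zero_fingers", "idle", command="grab")

commands = [fsm.observe(g) for g in ("five_fingers", "zero_fingers")]
# commands == [None, "grab"]
```

Because the model is just a transition table rather than a trained classifier, extending it is a data edit, not a retraining step, which is the design property the abstract emphasizes.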
Keywords (Chinese) ★ 人機互動 (Human-Machine Interaction)
★ 手勢辨識 (Gesture Recognition)
Keywords (English) ★ Human-Computer Interaction
★ Hand Gesture Recognition
Table of Contents
Chapter 1: Introduction
1.1 Background and Objectives
1.2 Research Content and Methods
1.3 System Architecture and Flow
1.4 Thesis Organization
Chapter 2: Literature Review
2.1 Human-Machine Interaction
2.2 Object Tracking
2.3 Gesture Recognition
2.4 Stereo Vision-Based Human-Machine Interaction
Chapter 3: Stereo Vision-Based Finger Localization
3.1 Image Preprocessing
3.1.1 Skin Color Detection
3.1.2 Morphological Image Processing
3.1.3 Object Labeling
3.1.4 Moving Object Detection
3.2 Two-Camera Stereo Vision
3.3 Finger Localization
3.3.1 Hand Edge Detection
3.3.2 Fingertip Detection
3.4 Dynamic Gesture Analysis
Chapter 4: System Design and Experiments
4.2 System Architecture
4.3 System Function Verification and Experiments
4.3.1 Detection of the Nearest Skin-Colored Object
4.3.2 Hand Edge Detection
4.3.3 Finger Detection
4.3.4 Single-Finger Movement Tracking Experiment
4.3.5 Hand Motion Analysis Experiment
4.4 Performance Evaluation and Comparison with Related Work
4.5 Application Demonstration: a 3D Gesture Photo Browser
Chapter 5: Conclusions
5.1 Conclusions
5.2 Future Work
References
Advisor: Ching-han Chen (陳慶瀚)   Date of Approval: 2009-07-07
