Master's/Doctoral Thesis 995202094: Detailed Record




Name: Hong-Bo Zeng (曾泓博)    Department: Computer Science and Information Engineering
Thesis Title: Robust Vision-based Multiple Fingertip Detection and Human Computer Interface Application
(Chinese title: 以視覺為基礎之強韌多指尖偵測與人機介面應用)
Related Theses
★ Face Replacement System for Designated Persons in Video
★ Implementing a Single-Finger Virtual Keyboard with a Single Camera
★ Vision-based Recognition of Handwritten Zhuyin Symbol Combinations from Trajectories
★ Vehicle Detection in Aerial Images Using Dynamic Bayesian Networks
★ Video-based Handwritten Signature Verification
★ Moving Skin-Color Region Detection Using Gaussian Mixture Models of Skin Color and Shadow Probability
★ Crowd Segmentation with Confidence Levels in Images
★ Region Segmentation and Classification of Aerial Surveillance Images
★ A Comparative Analysis of Different Features and Regression Methods for Crowd Counting
★ Traffic Flow Estimation from Nighttime Videos Captured with Raindrop-Contaminated Lenses
★ Image Feature Point Matching for Landmark Image Retrieval
★ Automatic Region-of-Interest Segmentation and Trajectory Analysis in Long-Range Traffic Images
★ Short-Term Solar Irradiance Forecasting Based on Regression Models Using All-Sky Image Features and Historical Information
★ Analysis of the Performance of Different Classifiers for Cloud Detection Application
★ Cloud Tracking and Sun Occlusion Prediction in All-Sky Images
★ Cloud Classification Using Texture Features in All-Sky Images
Files
  1. The electronic full text of this thesis is authorized for immediate open access.
  2. Users of the open-access electronic full text are authorized to search, read, and print it only for personal, non-profit academic research purposes.
  3. Please comply with the relevant provisions of the Copyright Act of the Republic of China (Taiwan); do not reproduce, distribute, adapt, repost, or broadcast the work without authorization.

Abstract (Chinese): As technology develops, people have grown accustomed to more user-friendly products, and intuitive, easy-to-use operation has become very important. Operating machines through hand gestures has therefore gradually become a trend that replaces traditional input devices. For gesture-operated machines, multiple fingertip detection is an essential component. Research on multiple fingertip detection falls into two main categories: wearable and markerless. Wearable approaches require extra equipment to mark the fingertip positions; the equipment must be carried around, which is inconvenient, and sharing it with other users raises hygiene concerns. Markerless approaches instead use skin color detection to locate the hand region and then find the fingertip positions.
This thesis adopts the markerless approach: a single camera captures images of a bare hand in operation, and the fingertip positions are located in those images. Because of their experimental settings or gesture definitions, other markerless studies did not consider the false detections that can occur when the arm enters the frame. We therefore add a finger-width measure to the contour and curvature information and integrate these cues into a single value that represents the likelihood that the corresponding point is a fingertip. This multiple fingertip detection method resolves the problem above. Finally, we implement a simple gesture-operated human computer interface system.
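The hand-region extraction step mentioned above (Chapter 2 of the thesis combines moving object detection, skin color detection, and background image updating) can be illustrated with a short sketch. This is a minimal OpenCV/Python illustration, not the thesis's actual implementation: the YCrCb skin bounds, the difference threshold, and the helper name hand_mask are assumptions made here for illustration only.

```python
import cv2
import numpy as np

# Illustrative skin bounds in YCrCb space; the thesis's actual color model
# and thresholds are not given in this record, so these values are assumptions.
LOWER_SKIN = np.array([0, 133, 77], dtype=np.uint8)
UPPER_SKIN = np.array([255, 173, 127], dtype=np.uint8)

def hand_mask(frame_bgr, background_bgr, diff_thresh=25):
    """Combine frame differencing with skin-color thresholding (hypothetical helper)."""
    # Moving-object cue: pixels that differ noticeably from the background image.
    diff = cv2.absdiff(frame_bgr, background_bgr)
    gray = cv2.cvtColor(diff, cv2.COLOR_BGR2GRAY)
    _, motion = cv2.threshold(gray, diff_thresh, 255, cv2.THRESH_BINARY)

    # Skin-color cue in YCrCb space.
    ycrcb = cv2.cvtColor(frame_bgr, cv2.COLOR_BGR2YCrCb)
    skin = cv2.inRange(ycrcb, LOWER_SKIN, UPPER_SKIN)

    # Keep pixels that are both moving and skin-colored.
    return cv2.bitwise_and(motion, skin)
```

In a setup like this, the background image would also have to be updated over time (as the thesis's Section 2.3 does); that update step is omitted from the sketch.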
Abstract (English): Intuitive and easy-to-use interfaces are very important to a successful product. With the development of technology, gesture-based machine interfaces have gradually become a trend that replaces traditional input devices. For gesture-based interfaces, multiple fingertip detection is a crucial step. Studies of multiple fingertip detection can be classified into two main categories: wearable and markerless. In the former, users need to wear additional equipment to facilitate fingertip detection. Considering the inconvenience and hygiene problems of wearable equipment, the latter requires no additional equipment to obtain the hand regions or the fingertip positions.
This thesis presents a markerless method. We use only a single camera to capture images and locate the fingertips accurately in those images. Many markerless approaches restrict their experimental environment or gesture definitions. In addition, some of them use contour and distance-to-centroid information to locate fingertips. Most of these methods assume that only hand regions appear in the scene and do not consider the problems that arise when the arms are also in the scene. In this thesis, we propose a multiple fingertip detection algorithm based on a likelihood value that combines contour and curvature information with finger-width data, which is more robust and flexible. Finally, we implement a human computer interface system using predefined gestures.
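To make the combined contour-curvature-and-width likelihood idea concrete, the sketch below scores every point of a hand contour with a k-curvature measure, weights it by a crude local width cue, and keeps high-scoring local maxima as fingertip candidates. It is a simplified stand-in under assumed parameters (k, max_width, threshold), not the algorithm proposed in the thesis.

```python
import numpy as np

def fingertip_likelihood(contour, k=15, max_width=40.0):
    """Per-point fingertip likelihood from k-curvature and a crude width cue.

    contour: (N, 2) array of (x, y) points ordered along a closed hand contour.
    """
    n = len(contour)
    scores = np.zeros(n)
    for i in range(n):
        p = contour[i].astype(float)
        a = contour[(i - k) % n].astype(float)   # point k steps behind
        b = contour[(i + k) % n].astype(float)   # point k steps ahead
        v1, v2 = a - p, b - p
        denom = np.linalg.norm(v1) * np.linalg.norm(v2) + 1e-9
        angle = np.arccos(np.clip(np.dot(v1, v2) / denom, -1.0, 1.0))
        curvature_score = 1.0 - angle / np.pi    # sharp peaks score high

        # Crude width cue: distance between the two flanking points; a fingertip
        # should be narrower than max_width (an assumed bound, not the thesis's value).
        width = np.linalg.norm(a - b)
        width_score = 1.0 if width <= max_width else max_width / width

        scores[i] = curvature_score * width_score
    return scores

def detect_fingertips(contour, threshold=0.75, **kw):
    """Return contour points whose likelihood is a local maximum above threshold."""
    s = fingertip_likelihood(contour, **kw)
    n = len(s)
    keep = [i for i in range(n)
            if s[i] > threshold and s[i] >= s[i - 1] and s[i] >= s[(i + 1) % n]]
    return contour[keep]
```

A real detector would additionally distinguish convex fingertip peaks from the concave valleys between fingers (for example, using the sign of the cross product of v1 and v2) and would verify candidates as the thesis does in its fingertip verification and error-correction steps; those refinements are omitted here for brevity.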
Keywords (Chinese) ★ human-computer interface
★ computer vision
★ fingertip detection
Keywords (English) ★ HCI
★ fingertip detection
★ computer vision
Table of Contents
Abstract (Chinese)
Abstract (English)
Table of Contents
List of Figures
List of Tables
Chapter 1  Introduction
1.1  Motivation
1.2  Literature Review
1.3  System Flow
1.4  Thesis Organization
Chapter 2  Hand Region Extraction
2.1  Moving Object Detection
2.2  Skin Color Detection
2.3  Background Image Updating
Chapter 3  Multiple Fingertip Detection Method
3.1  Contour Curvature Vector Sampling
3.2  Curvature Computation
3.3  Clustering by Direction
3.4  Selecting Cluster Representative Points
3.5  Fingertip Verification
3.6  Error Correction and Compensation
Chapter 4  Human Computer Interface Design
4.1  Graphical Interface and Program Overview
4.2  Functionality Overview
Chapter 5  Experimental Results and Analysis
5.1  Fingertip Detection Experiments
5.2  HCI Application Experiments
Chapter 6  Conclusions and Future Work
References
References
[1] "Mouse," [Online]. Available: http://www.7net.com.tw/7net/rui005.faces?ID=111100282044.
[2] "Writing Pad," [Online]. Available: http://buy.sina.com.tw/12078.htm.
[3] "SixthSense: A Wearable Gestural Interface," 2009. [Online]. Available: http://www.pranavmistry.com/projects/sixthsense/.
[4] S. K. Kane, D. Avrahami, J. O. Wobbrock, B. Harrison, A. D. Rea, M. Philipose and A. LaMarca, "Bonfire: A Nomadic System for Hybrid Laptop-tabletop Interaction," in UIST '09: Proceedings of the 22nd Annual ACM Symposium on User Interface Software and Technology, Victoria, British Columbia, Canada, 2009.
[5] D. Schmidt, M. K. Chong and H. Gellersen, "HandsDown: Hand-contour-based User Identification for Interactive Surfaces," in NordiCHI '10: Proceedings of the 6th Nordic Conference on Human-Computer Interaction: Extending Boundaries, New York, 2010.
[6] A. D. Wilson, "PlayAnywhere: A Compact Interactive Tabletop Projection-Vision System," in UIST '05: Proceedings of the 18th Annual ACM Symposium on User Interface Software and Technology, Seattle, Washington, USA, 2005.
[7] K. Oka, Y. Sato and H. Koike, "Real-time Fingertip Tracking and Gesture Recognition," in IEEE Computer Graphics and Applications, 2002.
[8] C. von Hardenberg and F. Bérard, "Bare-hand Human-computer Interaction," in PUI '01: Proceedings of the 2001 Workshop on Perceptive User Interfaces, New York, USA, 2001.
[9] C. Hsieh, D. Liou, Y. Cheng and F. Cheng, "Robust Visual Mouse by Motion History Image," in International Conference on System Science and Engineering, 2010.
[10] "Augmented Reality," [Online]
Available: http://en.wikipedia.org/wiki/Augmented_reality.
[11] S. Boring, D. Baur, A. Butz, S. Gustafson and P. Baudisch, "Touch Projector: Mobile Interaction through Video," in CHI’’10: Proceedings of the 28th International Conference on Human Factors in Computing Systems, Atlanta, Georgia, USA, 2010.
[12] C. Liao, H. Tang, Q. Liu, P. Chiu and F. Chen, "FACT: Fine-grained Cross-media Interaction with Documents via a Portable Hybrid Paper-laptop Interface," in MM’’10: Proceedings of the International Conference on Multimedia, Firenze, Italy, 2010.
[13] C. Liao, F. Guimbreti`ere, K. Hinckley and J. Hollan, "Papiercraft: A Gesture-based Command System for Interactive Paper," ACM Transactions on Computer-Human Interaction , vol. 14, no. 4, pp. 1-27, 2008.
[14] T. Lee and T. Hollerer, "Handy AR: Markerless Inspection of Augmented Reality Objects Using Fingertip Tracking," in IEEE International Symposium on Wearable Computers, Boston, 2007.
[15] S. Do-Lenh, F. Kaplan, A. Sharma and P. Dillenbourg, "Multi-finger Interactions with Papers on Augmented Tabletops," in TEI '09: Proceedings of the 3rd International Conference on Tangible and Embedded Interaction, 2009.
[16] M. Lee, R. Green and M. Billinghurst, "3D Natural Hand Interaction for AR Applications," in Image and Vision Computing New Zealand, 2008.
[17] Z. Zhang, Y. Wu, Y. Shan and S. Shafer, "Visual Panel: Virtual Mouse, Keyboard, and 3D Controller with an Ordinary Piece of Paper," in PUI '01: Proceedings of the 2001 Workshop on Perceptive User Interfaces, 2001.
[18] S. S. Rautaray and A. Agrawal, "A Novel Human Computer Interface Based on Hand Gesture Recognition Using Computer Vision Techniques," in IITM '10: Proceedings of the First International Conference on Intelligent Interactive Technologies and Multimedia, Allahabad, UP, India, 2010.
[19] Y. Fang, K. Wang, J. Cheng and H. Lu, "A Real-time Hand Gesture Recognition Method," in 2007 IEEE International Conference on Multimedia and Expo., 2007.
[20] C. Ng and S. Ranganath, "Real-time Gesture Recognition System and Application," Image and Vision Computing, 2002.
[21] C. Chuqing, L. Ruifeng and G. Lianzheng, "Real-time Multi-hand Posture Recognition," in International Conference on Computer Design and Applications, 2010.
[22] L. Gupta and S. Ma, "Gesture-based Interaction and Communication: Automated Classification of Hand Gesture Contours," IEEE Transactions on Systems, Man and Cybernetics, Part C (Applications and Reviews), vol. 31, no. 1, pp. 114-120, Feb. 2001.
[23] C. Harrison, D. Tan and D. Morris, "Skinput: Appropriating the Skin as an Interactive Canvas," Communications of the ACM, vol. 54, no. 8, pp. 111-118, Aug. 2011.
[24] 陳永祚, "A Real-time 3D Hand Gesture Interface System for Virtual Environments," Master's thesis, Department of Computer Science, National Tsing Hua University, June 2004.
[25] 洪兆欣, "A Trajectory-based Hand Gesture Recognition System," Master's thesis, Department of Computer Science and Information Engineering, National Central University, July 2006.
[26] "Depth Map," [Online]
Available: http://en.wikipedia.org/wiki/Depth_map.
[27] "Kinect," [Online]. Available: http://www.xbox.com/zh-TW/kinect.
[28] 柯志函, "Design and Development of a Markerless Real-time 3D Pointing Detection and Pointing Gesture Recognition System," Master's thesis, Graduate Institute of Industrial Management, National Taiwan University, January 2009.
[29] E. Larson, G. Cohn, S. Gupta, X. Ren, B. Harrison, D. Fox and S. N. Patel, "HeatWave: Thermal Imaging for Surface User Interaction," in CHI '11: Proceedings of the 2011 Annual Conference on Human Factors in Computing Systems, Vancouver, BC, Canada, 2011.
[30] S. Wu, Y. Zhang, S. Zhang, X. Ye, Y. Cai, J. Zheng, S. Ghosh, W. Chen and J. Zhang, "2D Motion Detection Bounded Hand 3D Trajectory Tracking and Gesture Recognition under Complex Background," in VRCAI '10: Proceedings of the 9th ACM SIGGRAPH Conference on Virtual-Reality Continuum and its Applications in Industry, Seoul, South Korea, 2010.
[31] 黃俊捷, "Design and Implementation of an Interactive Biped Robot (I): Gesture Recognition," Master's thesis, Department of Electrical Engineering, National Central University, June 2008.
[32] 李經寧, "A Real-time Hand Gesture Recognition System for Set-top Box Control," Master's thesis, Department of Computer Science and Information Engineering, National Central University, December 2009.
[33] X. Yin and X. Zhu, "Hand Posture Recognition in Gesture-based Human-robot Interaction," in IEEE International Conference on Industrial Electronics and Applications, 2006.
[34] Z. Pan, Y. Li, M. Zhang, C. Sun, K. Guo, X. Tang and S. Zhou, "A Real-time Multi-cue Hand Tracking Algorithm Based on Computer Vision," in IEEE International Conference on Virtual Reality VR, 2010.
[35] A. Argyros and M. Lourakis, "Vision-based Interpretation of Hand Gestures for Remote Control of a Computer Mouse," in Proceedings of the International Workshop on Computer Vision in HCI, 2006.
[36] S. Malik and J. Laszlo, "Visual Touchpad: A Two-handed Gestural Input Device," in ICMI '04: Proceedings of the 6th International Conference on Multimodal Interfaces, 2004.
[37] "技術報導-淺談影像監控之背景建立技術," [Online]
Available: http://140.113.87.112/vol_2/skill_7.htm.
[38] 鄭凱方, 人臉可辨識度計算用於監控系統中人臉正面最佳影像, 國立中央大學資訊工程研究所碩士論文, 民國九十四年六月.
[39] "Connected Component Labeling," [Online]
Available: http://en.wikipedia.org/wiki/Connected-component_labeling.
[40] M. Khoury and H. Liu, "Using Fuzzy Gaussian Inference and Genetic Programming to Classify 3D Human Motions," in UKCI’’08: Proceedings of the 8th Annual UK Workshop on Computational Intelligence, 2008.
Advisor: Hsu-Yung Cheng (鄭旭詠)    Date of Approval: 2012-07-25
