Master's/Doctoral Thesis 102522042: Detailed Record




Name: Lin, Ting-Kuo (林鼎國)   Graduation Department: Department of Computer Science and Information Engineering
Thesis Title: Real-Time Virtual Instruments Based On Neural Network System
(Chinese title: 基於類神經網路之即時虛擬樂器演奏系統)
Related Theses
★ A Grouping Mechanism Based on Social Relationships in edX Online Discussion Boards
★ A 3D Visualized Facebook Interaction System Built with Kinect
★ An Assessment System for Smart Classrooms Built with Kinect
★ An Intelligent Metropolitan Route-Planning Mechanism for Mobile Device Applications
★ Dynamic Texture Transfer Based on Analyzing Key-Momentum Correlations
★ A Seam-Carving System That Preserves Straight-Line Structures in Images
★ A Community Recommendation Mechanism Based on an Open Online Community Learning Environment
★ System Design of an Interactive Situated Learning Environment for English as a Foreign Language
★ An Emotional Color Transfer Mechanism Based on Skin-Color Preservation
★ A Hand Gesture Recognition Framework for Virtual Keyboards
★ Error Analysis of Fractional-Power Grey Generating Prediction Models and Development of a Computer Toolbox
★ Real-Time Human Skeleton Motion Construction Using Inertial Sensors
★ Real-Time 3D Modeling Based on Multiple Cameras
★ A Grouping Mechanism for Genetic Algorithms Based on Complementarity and Social Network Analysis
★ A Virtual Instrument Performance System with Real-Time Hand Tracking
★ A Real-Time Hand Tracking System: A Virtual Cello as an Example
  1. Access to the electronic full text of this thesis is approved for immediate open release.
  2. Once open access takes effect, the electronic full text is licensed only for personal, non-profit retrieval, reading, and printing for academic research purposes.
  3. Please comply with the relevant provisions of the Copyright Act of the Republic of China; do not reproduce, distribute, adapt, repost, or broadcast the work without authorization.

Abstract (Chinese, translated): In recent years, research on human-computer interaction (HCI) has appeared more and more often in everyday life. These applications not only make life more convenient and raise users' working efficiency, but also reduce costs. Through different kinds of HCI, users can convey their needs to the computer more naturally, quickly, and intuitively. With advances in technology and the release of precision devices, computers can now complete an ever wider range of tasks for users: a smartphone launches its applications with a simple touch on the panel, and camera devices such as phone lenses, the Kinect, the Leap Motion, and Google Glass recognize images and pass commands on to the computer, which carries out the assigned work once it receives them. Research on interacting with computers through such novel HCI techniques has gradually matured in recent years; these studies break with the traditional ways of interacting and communicating with computers, and HCI has gradually become part of modern life.
This thesis presents a method for implementing virtual instruments with the Leap Motion, paired with MIDI software so that the virtual instruments can play a wider range of timbres. We first train a neural network on several hand-feature descriptors, then add the trained network to the system to recognize predefined gestures that trigger designated functions. With the neural network added, the system still runs in real time.
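The abstract pairs the gesture recognizer with MIDI software to widen the available timbres. For background, a MIDI channel-voice note-on event is just three bytes (status, note number, velocity), so a recognized gesture can be mapped to a note with a few lines of code. The helper below is a hypothetical sketch of that mapping, not the thesis's actual implementation; the gesture names and note assignments are placeholders:

```python
def note_on(channel, note, velocity):
    """Build a raw MIDI note-on message: status 0x9n, then note and velocity (7-bit each)."""
    return bytes([0x90 | (channel & 0x0F), note & 0x7F, velocity & 0x7F])

def note_off(channel, note):
    """Build a raw MIDI note-off message: status 0x8n, then note, velocity 0."""
    return bytes([0x80 | (channel & 0x0F), note & 0x7F, 0])

# Hypothetical mapping from recognized gestures to note numbers (middle C = 60).
GESTURE_TO_NOTE = {"open_hand": 60, "fist": 64}

msg = note_on(0, GESTURE_TO_NOTE["open_hand"], 100)
print(msg.hex())  # 903c64
```

In a real system these bytes would be handed to a MIDI output port (or to a synthesizer such as the SONAR software mentioned in the table of contents), which turns them into sound.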
Abstract (English): Research in the field of Human-Computer Interaction (HCI) has become increasingly common in daily life. These applications not only make our lives more convenient and efficient, but also reduce overhead costs. Through different types of HCI applications, users can convey commands to a computer more naturally, quickly, and intuitively. With advances in computing technology and precision equipment, computers can now complete a wide variety of tasks for users. For example, a simple touch on a smartphone panel launches the alarm clock, navigation, or camera application, and camera devices such as the Kinect, Leap Motion, and Creative Senz3D can recognize images and relay commands to the computer, which then completes the assigned work upon receiving them. In recent years, research on innovative HCI technology has matured; these studies break with the traditional ways of interacting with a computer and have made HCI part of everyday life.
In this work, a Leap Motion controller captures the user's hand information, and the user's hand gestures are then recognized to reduce the burden of operating a virtual instrument. We train a neural network to analyze the information captured from the Leap Motion and to convey predefined commands to the computer. Finally, this thesis shows that the system remains in a real-time, stable state.
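The abstract describes training a neural network on hand-feature vectors captured from the Leap Motion to recognize predefined gestures. As an illustrative sketch only (the thesis's actual features, network size, and training procedure are detailed in Chapter 3; the feature names, dimensions, and toy data below are placeholders), a one-hidden-layer network trained with plain backpropagation might look like this:

```python
import numpy as np

rng = np.random.default_rng(0)

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

class GestureMLP:
    """One-hidden-layer network trained with vanilla backpropagation (squared error)."""
    def __init__(self, n_in, n_hidden, n_out, lr=0.5):
        self.W1 = rng.normal(0, 0.5, (n_in, n_hidden))
        self.b1 = np.zeros(n_hidden)
        self.W2 = rng.normal(0, 0.5, (n_hidden, n_out))
        self.b2 = np.zeros(n_out)
        self.lr = lr

    def forward(self, x):
        self.h = sigmoid(x @ self.W1 + self.b1)   # hidden activations
        self.o = sigmoid(self.h @ self.W2 + self.b2)  # output activations
        return self.o

    def train_step(self, x, target):
        o = self.forward(x)
        # Deltas for squared error with sigmoid units, propagated backwards
        delta_o = (o - target) * o * (1 - o)
        delta_h = (delta_o @ self.W2.T) * self.h * (1 - self.h)
        self.W2 -= self.lr * np.outer(self.h, delta_o)
        self.b2 -= self.lr * delta_o
        self.W1 -= self.lr * np.outer(x, delta_h)
        self.b1 -= self.lr * delta_h

# Toy training set: 4-D feature vectors (e.g. normalized finger spread, palm
# height, palm speed, grab strength -- placeholder features, not the thesis's
# actual descriptors) mapped to two gesture classes.
X = np.array([[0.9, 0.8, 0.1, 0.1],   # "open hand"
              [0.8, 0.9, 0.2, 0.0],
              [0.1, 0.2, 0.1, 0.9],   # "fist"
              [0.0, 0.1, 0.2, 0.8]])
Y = np.array([[1, 0], [1, 0], [0, 1], [0, 1]], dtype=float)

net = GestureMLP(n_in=4, n_hidden=6, n_out=2)
for _ in range(2000):
    for x, y in zip(X, Y):
        net.train_step(x, y)

pred = np.argmax(net.forward(np.array([0.85, 0.85, 0.15, 0.05])))
print("predicted class:", pred)
```

In the real system the trained forward pass would run once per Leap Motion frame; since it is only two small matrix multiplications, recognition can keep up with the sensor's frame rate, consistent with the real-time claim above.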
Keywords (Chinese, translated) ★ Hand Gesture Recognition
★ Neural Network
★ Virtual Instruments
Keywords (English) ★ Leap Motion
★ Hand Gesture Recognition
★ Neural Network
★ Virtual Instruments
Table of Contents
摘要 (Chinese Abstract) i
Abstract ii
Acknowledgements iii
Contents iv
List of Figures vi
List of Tables viii
Chapter 1. Introduction 1
1.1 Background 1
1.2 Motivation 2
1.3 Thesis Organization 4
Chapter 2. Related Works 5
2.1 Wearable devices 6
2.2 RGB camera 8
2.3 RGB-D / Depth camera and neural network 9
2.4 Dynamic time warping 12
2.5 Self-organizing maps (SOM) 13
2.6 Support Vector Machine (SVM) 15
Chapter 3. Proposed Method 18
3.1 Neural Network 19
3.2 Operation of System 22
3.3 The Predefined Hand Gestures 24
3.4 Features 25
3.5 Backpropagation Algorithm (BP) 26
3.6 Training neural network 29
3.7 Hand Gestures Recognition System Flow 31
3.8 Recognizing Problem 32
3.9 Recognizing problem exclude 33
Chapter 4. Applications 36
4.1 User interface 36
4.2 System adjustment 37
4.3 Discussions 42
4.4 MIDI 46
4.5 SONAR 48
Chapter 5. Experimental Results 51
5.1 Environment 51
5.2 Experimental Results 51
Chapter 6. Conclusions and Future Works 54
6.1 Conclusions 54
6.2 Future Works 54
References 58
參考文獻 [1] Pedro Neto, Dário Pereira, and J. Norberto Pires, Member, IEEE and A. Paulo Moreira, Member, IEEE, “Real-Time and Continuous Hand Gesture Spotting: an Approach Based on Artificial Neural Networks,” 2013 IEEE International Conference on Robotics and Automation (ICRA) Karlsruhe, Germany, May 6-10, 2013.
[2] Deyou Xu, “A Neural Network Approach for Hand Gesture Recognition in Virtual Reality Driving Training System of SPG,” The 18th International Conference on Pattern Recognition, Artillery Academy at Nanjing, China, 2006.
[3] G. R. S. Murthy and R. S. Jadon, “Hand Gesture Recognition using Neural Networks,” 2010 IEEE 2nd International Advance Computing Conference, 2010.
[4] Yan Wen, Chuanyan Hu, Guanghui Yu, and Changbo Wang, “A Robust Method of Detecting Hand Gestures Using Depth Sensors,” Haptic Audio Visual Environments and Games (HAVE), 2012 IEEE, pp.72-77, 2012.
[5] Rajesh Mapari, and Dr. Govind Kharat, “Hand Gesture Recognition using Neural Network,” International Journal of Computer Science and Network (IJCSN) Volume 1, Issue 6, December 2012.
[6] José Manuel Palacios and Carlos Sagüés, “Human-Computer Interaction Based on Hand Gestures Using RGB-D Sensors,” Sensors, 2013.
[7] Sharad Vikram, and Lei Li, “Handwriting and Gestures in the Air, Recognizing on the Fly,” CHI 2013 Extended Abstracts, Paris, France, April 27, May 2, 2013.
[8] F. Coleca, A. State, S. Klement, E. Barth, and T. Martinetz, “Self-organizing maps for hand and full body tracking,” Neurocomputing, vol. 147, pp. 174–184, 2015.
[9] T.-N. Nguyen, D.-H. Vo, H.-H. Huynh, and J. Meunier, “Geometry-based Static Hand Gesture Recognition using Support Vector Machine,” 2014 13th International Conference on Control, Automation, Robotics & Vision (ICARCV 2014), Marina Bay Sands, Singapore, December 10–12, 2014.
[10] M.P. Paulraj, S. Yaacob, M.S. bin Zanar Azalan, and R. Palaniappan,“A phoneme based sign language recognition system using skin color segmentation,” IEEE 6th International Colloquium on Signal Processing and Its Applications (CSPA), pp.1-5, 2010.
[11] X. Wen and Y. Niu, “A Method for Hand Gesture Recognition Based on Morphology and Fingertip-Angle,” The 2nd International Conference on Computer and Automation Engineering (ICCAE), vol. 1, pp. 688–691, 2010.
[12] M. Kawulok, J. Kawulok, and J. Nalepa, “Spatial based skin detection using discriminative skin presence features”, Pattern Recogn. Lett., 2013.
[13] Mokhtar M. Hasan, and Pramoud K. Mirsa, “Brightness Factor Matching For Gesture Recognition System Using Scaled Normalization”, International Journal of Computer Science & Information Technology (IJCSIT), vol. 3(2), 2011.
[14] M. Andersen, T. Jensen, P. Lisouski, A. Mortensen, M. Hansen, T. Gregersen, and P. Ahrendt, “Kinect Depth Sensor Evaluation for Computer Vision Applications”, Technical Report ECE-TR-6, Department of Engineering Electrical and Computer Engineering, Aarhus University, Denmark, 2012.
[15] J. Lambrecht and J. Krüger, “Spatial programming for industrial robots based on gestures and augmented reality”, in Proc. 2012 IEEE/RSJ International Conference on Intelligent Robots and Systems, Portugal, 2012, pp. 466–472.
[16] D. Kelly, J. Mc Donald, and C. Markham, “Weakly supervised training of a sign language recognition system using multiple instance learning density matrices”, IEEE Trans. Systems, Man Cybernetics–Part B, vol. 41, no. 2, pp. 526–541, 2011.
[17] J. Shotton, A. Fitzgibbon, M. Cook, T. Sharp, M. Finocchio, R. Moore, A. Kipman, and A. Blake, “Real-time human pose recognition in parts from single depth images”, in Computer Vision and Pattern Recognition (CVPR), 2011 IEEE Conference on, june 2011, pp. 1297 –1304.
[18] V. Frati and D. Prattichizzo, “Using kinect for hand tracking and rendering in wearable haptics”, in World Haptics Conference (WHC), 2011 IEEE, june 2011, pp. 317 –321.
[19] M. Tang, “Recognizing hand gestures with microsoft’s kinect”, Technical Report of Department of Electrical Engineering, Stanford University, March 2011.
[20] D. Xu, Y.L. Chen, C. Lin, X. Kong, and X. Wu, “Real-Time Dynamic Gesture Recognition System Based on Depth Perception for Robot Navigation”, In Proceedings of the IEEE International Conference on Robotics and Biomimetics, Guangzhou, China, 11–14 December 2012; pp. 689–694.
[21] Z. Zafrulla, H. Brashear, T. Starner, H. Hamilton, and P. Presti, “American sign language recognition with the Kinect”, In Proceedings of the 13th International Conference on Multimodal Interfaces, Alicante, Spain, 14–18 November 2011; pp. 279–286.
[22] Y. Wen, C. Hu, G. Yu, and C. Wang, “A Robust Method of Detecting Hand Gestures Using Depth Sensors”, In Proceedings of the 2012 IEEE International Workshop on Haptic Audio Visual Environments and Games (HAVE), Munich, Germany, 8–9 October 2012; pp. 72–77.
[23] Timothy K. Shih, “Spider King: Virtual Musical Instruments based on Microsoft Kinect”, The 6th IEEE International Conference on Ubi-Media Computing, 2013, Aizu-Wakamatsu, Japan, November 2-4.
[24] Iason Oikonomidis, Nikolaos Kyriazis, and Antonis A. Argyros, “Efficient model-based 3D tracking of hand articulations using Kinect”, in Proceedings of the 22nd British Machine Vision Conference (BMVC) , 2011, University of Dundee, UK, Aug. 29-Sep.1.
[25] J.L. Raheja, A. Chaudhary, and K. Singal, “Tracking of Fingertips and Centers of Palm Using KINECT”, Computational Intelligence, Modelling and Simulation (CIMSiM), 2011, pp. 248-252.
[26] 蘇木春 and 張孝德, Machine Learning: Neural Networks, Fuzzy Systems, and Genetic Algorithms (revised 2nd ed.), Taiwan: Chuan Hwa Book Co., 2004.
Advisor: Timothy K. Shih (施國琛)   Approval Date: 2015-07-23

For questions regarding this thesis, please contact the Promotion Services Division, National Central University Library, TEL: (03)422-7151 ext. 57407, or by e-mail.