Master's Thesis 110552011: Detailed Record




Name: Wen-Yueh Lo (羅文岳)    Graduate Department: Executive Master Program, Department of Computer Science and Information Engineering
Thesis Title: User-definable Gesture Command System (使用者自定義手勢指令系統)
Related Theses
★ An Intelligent Controller Development Platform Integrating the GRAFCET Virtual Machine
★ Design and Implementation of a Distributed Industrial Electronic Kanban Network System
★ Design and Implementation of a Dual-Point Touch Screen Based on a Two-Camera Vision System
★ An Embedded Computing Platform for Intelligent Robots
★ An Embedded System for Real-Time Moving Object Detection and Tracking
★ A Multiprocessor Architecture and Distributed Control Algorithm for Solid-State Drives
★ A Human-Computer Interaction System Based on Stereo-Vision Gesture Recognition
★ Robot System-on-Chip Design Integrating Bionic Intelligent Behavior Control
★ Design and Implementation of an Embedded Wireless Image Sensor Network
★ A License Plate Recognition System Based on a Dual-Core Processor
★ Continuous 3D Gesture Recognition Based on Stereo Vision
★ Design and Hardware Implementation of a Miniature, Ultra-Low-Power Wireless Sensor Network Controller
★ Real-Time Face Detection, Tracking, and Recognition for Streaming Video: An Embedded System Design
★ Embedded Hardware Design for a Fast Stereo Vision System
★ Design and Implementation of a Real-Time Continuous Image Stitching System
★ An Embedded Gait Recognition System Based on a Dual-Core Platform
Full Text: Viewable in the thesis system after 2029-01-11 (embargoed until then)
Abstract (Chinese) As an intuitive means of communication, gestures have long been a focus of exploration in human-computer interaction. However, current gesture-control products allow users to operate only with a fixed set of predefined gestures, which raises the learning cost for users from different cultural backgrounds or regions and forfeits the natural, intuitive, low-learning-cost advantages that gestures originally offer.
This thesis uses deep-learning image recognition to build a user-definable gesture system that lets users configure gesture commands according to personal preference, avoiding the limitations of existing products. The system's flexibility resolves the confusion in gesture interpretation and usage caused by cultural and regional differences.
Applying the MIAT methodology, we modularly designed an integrated system covering gesture recognition, command configuration, and human-computer interaction. Replacing traditional image processing with deep-learning models improves accuracy and adaptability in complex environments. Beyond recognizing static gestures, we also implemented dynamic gesture combination and tracking, allowing users to define gesture commands more freely and create dozens of gesture commands suited to their own needs. Finally, we implemented a graphical-interface application that integrates these functions.
Abstract (English) Gestures have long been a focal point in human-computer interaction, serving as an intuitive form of communication. However, current gesture-controlled products restrict users to specific gestures, amplifying the learning curve for individuals from diverse cultural or geographical backgrounds. Consequently, this limitation undermines the inherent advantages of natural, intuitive, and easily learned gestures.
This thesis capitalizes on deep learning image recognition technology to establish a customizable gesture system. This system enables users to configure gesture commands based on personal preferences, effectively bypassing the limitations of existing products. Its flexibility resolves confusion in gesture interpretation and usage arising from cultural and regional differences.
Utilizing the MIAT methodology, we modularly designed an integrated system encompassing gesture recognition, command configuration, and human-computer interaction. By replacing traditional image processing with deep learning models, we significantly enhanced accuracy and adaptability, particularly in complex environments. In addition to recognizing static gestures, we implemented dynamic gesture combination and tracking, empowering users to freely define gesture commands and create dozens of personalized gestures. Finally, we built a graphical user interface application that incorporates these functionalities.
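Since the full text is embargoed until 2029, the actual implementation is not reproduced here. As a rough illustration of the idea the abstract describes, the following minimal Python sketch shows how per-frame gesture labels (such as those a YOLO-style hand detector might emit) could be matched against user-defined gesture sequences and dispatched to bound commands. Every name in it (GestureCommandRegistry, register, feed, and the gesture labels) is hypothetical and not taken from the thesis.

```python
from collections import deque
from typing import Callable, Deque, Dict, List, Tuple

# Hypothetical sketch (not from the thesis): bind user-defined gesture
# sequences to actions, given a stream of per-frame gesture labels
# such as a YOLO-style hand detector might produce.

class GestureCommandRegistry:
    def __init__(self, max_len: int = 8):
        # Recent gesture history; the longest definable sequence is max_len.
        self.history: Deque[str] = deque(maxlen=max_len)
        self.bindings: Dict[Tuple[str, ...], Callable[[], None]] = {}

    def register(self, sequence: List[str], action: Callable[[], None]) -> None:
        """Let the user bind an arbitrary gesture sequence to an action."""
        self.bindings[tuple(sequence)] = action

    def feed(self, label: str) -> None:
        """Consume one recognized gesture label from the detector."""
        # Ignore repeated labels so a held pose counts only once.
        if self.history and self.history[-1] == label:
            return
        self.history.append(label)
        # Fire the longest user-defined sequence matching the history tail.
        for seq in sorted(self.bindings, key=len, reverse=True):
            if len(seq) <= len(self.history) and tuple(self.history)[-len(seq):] == seq:
                self.bindings[seq]()
                self.history.clear()
                break

# Example: a user prefers "fist then open palm" for play/pause.
registry = GestureCommandRegistry()
registry.register(["fist", "palm"], lambda: print("toggle play/pause"))
for frame_label in ["fist", "fist", "palm"]:  # labels from the detector
    registry.feed(frame_label)
```

A real system along these lines would also need temporal smoothing of detections and hand tracking for the dynamic gestures the abstract mentions; the point of the sketch is only that the sequence-to-command table is user-edited data rather than logic fixed by the product.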
Keywords (Chinese) ★ gesture recognition (手勢辨識)    Keywords (English) ★ Gesture
Table of Contents
Abstract (Chinese) i
Abstract (English) ii
Acknowledgements iii
Contents iv
List of Figures vi
List of Tables viii
Chapter 1: Introduction 1
1.1 Research Background 1
1.2 Research Objectives 2
1.3 Thesis Organization 3
Chapter 2: Technical Review 4
2.1 Gesture-Based Human-Computer Interaction 4
2.2 Hand Detection Methods 5
2.3 Gesture Commands 8
Chapter 3: System Design 11
3.1 MIAT System Design Methodology 11
3.2 Gesture Command System Architecture 15
Chapter 4: System Implementation and Experiments 26
4.1 Experimental Environment 26
4.2 YOLO Model Training 26
4.3 Gesture Recognition Experiments 29
4.4 System Integration Verification 34
Chapter 5: Development of the User-Definable Gesture Application System 36
5.1 Development Environment 36
5.2 Implementation of the User-Definable Gesture Application System 36
5.3 Application Experiment System 38
Chapter 6: Conclusion 42
6.1 Conclusion 42
6.2 Future Work 42
References 44
Advisor: Ching-Han Chen (陳慶瀚)    Date of Approval: 2024-01-22