Graduate Thesis 105522085: Detailed Record




Author: Hsiang-Ling Chang (張湘菱)    Department: Computer Science and Information Engineering
Thesis title: A Smart-Glasses-based Augmented Reality Assisted System
(Chinese title: 基於智慧眼鏡之擴增實境輔助系統)
Related theses
★ A Q-learning-based swarm intelligence algorithm and its applications
★ Development of a rehabilitation system for children with developmental delay
★ Comparing teacher assessment and peer assessment from the perspective of cognitive style: from English writing to game making
★ A prediction model for diabetic nephropathy based on laboratory test values
★ Design of a remote-sensing image classifier based on fuzzy neural networks
★ A hybrid clustering algorithm
★ Development of assistive devices for people with disabilities
★ A study of fingerprint classifiers
★ A study of backlit image compensation and color quantization
★ Application of neural networks to the selection of business income tax audit cases
★ A new online learning system and its application to tax audit case selection
★ An eye-tracking system and its applications to human-machine interfaces
★ Data visualization combining swarm intelligence and self-organizing maps
★ Development of a pupil-tracking system and its human-machine interface applications for people with disabilities
★ An artificial-immune-system-based online learning neuro-fuzzy system and its applications
★ Application of genetic algorithms to the descrambling of scrambled speech
Access and use of the electronic full text:
  1. The author has agreed to make this electronic thesis openly available immediately.
  2. The open-access full text is licensed only for searching, reading, and printing for personal, non-profit academic research.
  3. Please comply with the Copyright Act of the Republic of China; do not reproduce, distribute, adapt, repost, or broadcast the work without authorization.

Abstract (Chinese) Augmented reality has developed rapidly in recent years, and with the advent of smart glasses the technology has brought considerable convenience to daily life. This thesis proposes an augmented reality assisted system deployed on smart glasses, so that the user operates it from a first-person perspective.
The system provides three functions: (1) a simple procedure for building a dataset of the objects on a device, with Mask R-CNN used to recognize the class and position of each object; (2) extraction of pointing gestures from the image and analysis of the pointing direction, so that object information is displayed according to where the finger points; (3) use of a calibration object to determine the angle at which the virtual assistive tools should be displayed.
The purpose of the system is to support the training of technicians: with a pointing gesture, the information of the object the operator wants to understand is displayed. Experiments show that the object detection recognition rate reaches 95.5%; the Kappa value for gesture detection is 0.93, with an average detection time of 0.26 seconds; and the pointing analysis accuracy remains 79% even under different lighting conditions, demonstrating that the methods used in this thesis are highly reliable.
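The gesture-detection result above is reported as a Kappa value, i.e. Cohen's kappa, which measures agreement between the detector's output and the ground truth after discounting the agreement expected by chance. As a quick illustration only (the confusion-matrix counts below are invented for the example and are not the thesis's data), a minimal Python sketch of the computation:

```python
import numpy as np

def cohens_kappa(confusion: np.ndarray) -> float:
    """Cohen's kappa from a square confusion matrix (rows: ground truth, cols: prediction)."""
    total = confusion.sum()
    p_o = np.trace(confusion) / total                                          # observed agreement
    p_e = (confusion.sum(axis=0) * confusion.sum(axis=1)).sum() / total ** 2   # chance agreement
    return (p_o - p_e) / (1.0 - p_e)

# Invented 2x2 example (gesture vs. non-gesture frames), for illustration only.
example = np.array([[90, 10],
                    [ 5, 95]])
print(f"kappa = {cohens_kappa(example):.2f}")   # 0.85 for these made-up counts
```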
Abstract (English) Recently, applications of augmented reality have become increasingly prevalent, and with the advent of smart glasses, related research on augmented reality has grown vigorously. This thesis therefore proposes an augmented reality assisted system deployed on smart glasses, which allows the user to operate the system from a first-person perspective.
The system has three main features: (1) With a simple procedure, the system builds a dataset of the objects on a device and uses Mask R-CNN to automatically identify each object and its position. (2) By extracting the pointing gesture from the image and analyzing the pointing direction, the system displays object information according to where the finger points. (3) A calibration object is used to determine the rotation angles of the virtual tools.
The aim of this work is to provide a system that assists in the training of technicians: by pointing, the user immediately sees on the smart glasses the information of the object of interest. According to the experimental results, the recognition rate of object detection is 95.5%, the Kappa value of gesture detection is 0.93, and the average time to detect a pointing gesture is 0.26 seconds. Furthermore, even under different lighting, the accuracy of the pointing analysis reaches 79%. These results show that the methods applied in this thesis are reliable.
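The abstract only summarizes how the pointed-at object is selected, so the following is a minimal, hypothetical sketch of one way that step could be implemented, not the thesis's actual algorithm: given a fingertip position and pointing direction in image coordinates, together with the centroids of the objects segmented by the detector, choose the object whose centroid lies closest to the pointing ray. The function name, coordinates, and data are illustrative assumptions.

```python
import numpy as np

def pick_pointed_object(fingertip, direction, centroids):
    """Index of the detected-object centroid closest to the pointing ray (hypothetical helper)."""
    tip = np.asarray(fingertip, dtype=float)
    d = np.asarray(direction, dtype=float)
    d /= np.linalg.norm(d)                                 # unit pointing direction
    v = np.asarray(centroids, dtype=float) - tip           # vectors fingertip -> centroids
    along = v @ d                                          # signed distance along the ray
    perp = np.linalg.norm(v - np.outer(along, d), axis=1)  # perpendicular distance to the ray
    perp[along < 0] = np.inf                               # ignore objects behind the fingertip
    return int(np.argmin(perp))

# Toy example: three detected objects, finger pointing to the right in image coordinates.
print(pick_pointed_object(fingertip=(100, 200), direction=(1, 0),
                          centroids=[(300, 195), (120, 400), (50, 200)]))  # -> 0
```

In the real system the centroids would presumably come from the Mask R-CNN masks and the ray from the extracted pointing gesture; the sketch only illustrates the geometric selection step.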
Keywords (Chinese) ★ 擴增實境 (augmented reality)
★ 手勢偵測 (gesture detection)
★ 手指指向分析 (finger-pointing analysis)
Keywords (English) ★ augmented reality
★ hand gesture recognition
★ finger-pointing analysis
Table of contents
Abstract (Chinese) i
Abstract (English) ii
Acknowledgements iv
Table of Contents v
List of Figures vii
List of Tables x
Chapter 1: Introduction 1
1-1 Research Motivation 1
1-2 Research Objectives 2
1-3 Thesis Organization 3
Chapter 2: Related Work 4
2-1 Augmented Reality Applications 4
2-2 Object Detection Techniques 7
2-3 Mask R-CNN 12
2-4 Finger-Pointing Analysis Techniques 15
2-4-1 Gesture Detection Techniques 15
2-4-2 Finger-Pointing Analysis 17
Chapter 3: Methodology 19
3-1 System Architecture 19
3-1-1 Hardware Overview 20
3-1-2 System Workflow 21
3-2 Device Object Detection 22
3-3 Finger-Pointing Analysis Algorithm 24
3-3-1 Pointing Gesture Extraction 24
3-3-2 Finger-Pointing Analysis 29
3-4 Virtual Tool Positioning and Rotation Angle 36
3-4-1 Virtual Tool Positioning 36
3-4-2 Virtual Tool Rotation Angle 37
Chapter 4: Experimental Design and Results 40
4-1 Engine Compartment Object Detection Experiment 40
4-1-1 Experimental Design 40
4-1-2 Evaluation Method 42
4-1-3 Experimental Results 42
4-1-4 Analysis 43
4-2 Motherboard Object Detection Experiment 46
4-2-1 Experimental Design 46
4-2-2 Results and Analysis 48
4-3 Gesture Detection Time Experiment 49
4-3-1 Experimental Design 49
4-3-2 Evaluation Method 50
4-3-3 Results on Gesture Recognition Rate and Detection Time 51
4-3-4 Analysis of Gesture Detection Results 55
4-4 Gesture Detection Time under Different Lighting Levels 56
4-4-1 Experimental Design 56
4-4-2 Results on Gesture Recognition Rate and Detection Time 58
4-4-3 Analysis of Gesture Detection Results 62
4-5 Gesture versus Non-Gesture Detection Experiment 63
4-5-1 Experimental Design 63
4-5-2 Evaluation Metrics 65
4-5-3 Results and Analysis 67
4-6 Finger-Pointing Analysis Experiment 70
4-6-1 Experimental Design 70
4-6-2 Pointing Gesture Extraction: Experiment and Analysis 71
4-6-3 Pointing Direction Determination: Experiment and Analysis 75
4-6-4 Pointing Analysis Accuracy: Experiment and Analysis 79
4-6-5 Conclusions on Finger-Pointing Analysis 83
4-7 Rotation Angle Experiment 84
Chapter 5: Conclusions and Future Work 85
5-1 Conclusions 85
5-2 Future Work 86
References 87
Advisor: Mu-Chun Su (蘇木春)    Approval date: 2018-08-20
