Master's/Doctoral Thesis 104522048 — Detailed Record




Name: Wei-Jen Lin (林威任)    Department: Computer Science and Information Engineering
Thesis Title: A Personal Guidance Aid System for the Blind (個人型之盲人引導輔助系統)
Related Theses
★ Q-learning-based swarm intelligence algorithms and their applications
★ Development of a rehabilitation system for children with developmental delays
★ Comparing teacher assessment and peer assessment from the perspective of cognitive style: from English writing to game design
★ A prediction model for diabetic nephropathy based on laboratory test values
★ Design of a remote-sensing image classifier based on fuzzy neural networks
★ A hybrid clustering algorithm
★ Development of assistive devices for people with disabilities
★ A study on fingerprint classifiers
★ A study on backlit image compensation and color quantization
★ Application of neural networks to business income tax audit case selection
★ A new online learning system and its application to tax audit case selection
★ An eye-tracking system and its application to human-computer interfaces
★ Data visualization combining swarm intelligence and self-organizing maps
★ Development of a pupil-tracking system for human-computer interface applications for people with disabilities
★ An artificial-immune-system-based online-learning neuro-fuzzy system and its applications
★ Application of genetic algorithms to speech descrambling
  1. This electronic thesis is approved for immediate open access.
  2. The open-access full text is licensed to users for personal, non-commercial academic research only: searching, reading, and printing.
  3. Please comply with the Copyright Act of the Republic of China; do not reproduce, distribute, adapt, repost, or broadcast it without authorization.

Abstract (Chinese) Compared with people with other sensory impairments, the visually impaired face considerably more difficulty in daily self-care, yet the relevant assistive tools have advanced very slowly: the white cane has been in use for more than 100 years and remains the standard, while guide dogs still face many barriers to widespread adoption. In recent years, research on using computer vision to improve the independence of the visually impaired has therefore grown rapidly.
This thesis uses the Kinect depth camera to build a daily-living assistance system for the visually impaired. The system has two main functions. (1) Using the floor normal vectors provided by Kinect, together with erosion and dilation operations, it reconstructs the floor region and extracts path information such as path length and obstacles on the path, giving the user additional information while walking. (2) It helps the user find daily necessities: a convolutional neural network is first trained to recognize everyday objects; at run time, image segmentation and a statistical vote determine an object's position in the color image so the user can find it quickly.
The floor segmentation algorithm used in this system deviates from ground truth by only about 1.2%; it effectively extracts the floor region and compensates for errors in the Kinect depth data. Ten thousand images were prepared for each of the three object-detection scenarios, and even in the worst case the target object was missed less than 5% of the time, demonstrating that the proposed method can effectively locate target objects.
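As a rough illustration of the floor-segmentation step described in the abstract above, the sketch below thresholds per-pixel surface normals against the up direction and then applies a morphological opening (erosion followed by dilation) to suppress speckle caused by depth-sensor noise. This is a minimal sketch under assumed parameters, not the thesis's implementation: `clean_floor_mask`, the 3×3 structuring element, the 0.9 cosine threshold, and the up vector are all illustrative choices.

```python
import numpy as np

def _shift_combine(mask, combine, fill):
    # Apply a 3x3 neighbourhood operation by combining the 8 shifted copies
    # of the mask with the given boolean operator.
    h, w = mask.shape
    padded = np.pad(mask, 1, constant_values=fill)
    out = np.full_like(mask, fill)
    for dy in (-1, 0, 1):
        for dx in (-1, 0, 1):
            out = combine(out, padded[1 + dy:1 + dy + h, 1 + dx:1 + dx + w])
    return out

def erode(mask):
    # A pixel survives erosion only if its whole 3x3 neighbourhood is set.
    return _shift_combine(mask, np.logical_and, True)

def dilate(mask):
    # A pixel is set after dilation if any pixel in its 3x3 neighbourhood is set.
    return _shift_combine(mask, np.logical_or, False)

def clean_floor_mask(normals, up=(0.0, 1.0, 0.0), cos_thresh=0.9, iters=2):
    """normals: (H, W, 3) array of unit surface normals per pixel.
    Returns a boolean floor mask cleaned by a morphological opening."""
    # Candidate floor pixels: normal nearly parallel to the up direction.
    cos = normals @ np.asarray(up, dtype=float)
    mask = cos > cos_thresh
    # Opening (erode then dilate) removes isolated speckle from depth errors.
    for _ in range(iters):
        mask = erode(mask)
    for _ in range(iters):
        mask = dilate(mask)
    return mask
```

The opening removes isolated false-positive floor pixels while leaving a large connected floor region, and genuine holes such as obstacles, essentially intact.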
Abstract (English)
Compared with people with other disabilities, the visually impaired face greater difficulty in caring for themselves, yet progress in blind guidance aids has been slow. The white cane, for example, has been in use for more than a hundred years, and no newer aid has fully replaced it; guide dogs are another option, but they are hard to deploy widely. As a result, more and more research on computer-vision-based aid systems has been published.
In this thesis, we use the Kinect depth sensor to build a guidance aid system for the blind. The system has two features. (1) It reconstructs path information from the surface normal vectors provided by Kinect using erosion and dilation algorithms, recovering the length of the walkable path and the height of any obstacle, which helps the visually impaired user understand the path ahead. (2) It assists the visually impaired in finding daily necessities: a recognition model is first trained with a convolutional neural network; at run time, the model is applied to a series of segmented sub-images, and the object's location is obtained from the statistics of their scores, letting the user find items quickly.
The floor extraction algorithm used in the system reaches an accuracy of about 98.8%, meaning it can compensate for errors in the Kinect depth data and efficiently extract the regions belonging to the floor. For the object detection tests, we prepared ten thousand images for each of three scenarios; even in the worst condition, the target object was missed in fewer than 5% of cases, demonstrating that the method can reliably locate target objects.
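The "segment, classify, vote" localization outlined in the abstract can be sketched as a sliding-window vote. This is a hypothetical simplification, not the thesis's actual pipeline: `locate_object` is an illustrative name, `classify` stands in for the trained convolutional neural network that scores each crop, and the window size, stride, and per-pixel averaging are assumed parameters.

```python
import numpy as np

def locate_object(image, classify, win=64, stride=32):
    """Slide a window over the image, score each crop with `classify`
    (a stand-in for a trained CNN returning P(target in crop)), and
    accumulate the scores into a per-pixel heat map.
    Returns the (row, col) of the strongest averaged response."""
    h, w = image.shape[:2]
    heat = np.zeros((h, w))
    hits = np.zeros((h, w))
    for y in range(0, h - win + 1, stride):
        for x in range(0, w - win + 1, stride):
            score = classify(image[y:y + win, x:x + win])
            heat[y:y + win, x:x + win] += score   # every covered pixel votes
            hits[y:y + win, x:x + win] += 1
    heat /= np.maximum(hits, 1)                   # average vote per pixel
    return np.unravel_index(np.argmax(heat), heat.shape)
```

Averaging overlapping window scores makes the estimate robust to a single misclassified crop, which matches the statistical flavor of the final-decision step described above; with a real CNN, `classify` would run the model on each resized crop.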
Keywords (Chinese) ★ Kinect
★ the visually impaired
★ convolutional neural networks
★ object detection
Keywords (English) ★ Kinect
★ visual impairments
★ convolutional neural networks
★ item detection
Table of Contents
Abstract (Chinese)
Abstract (English)
Acknowledgements
Table of Contents
List of Figures
List of Tables
Chapter 1: Introduction
    1-1 Motivation
    1-2 Objectives
    1-3 Thesis Organization
Chapter 2: Related Work
    2-1 Overview of Blind Guidance Aids
        2-1-1 Electronic Travel Aids
        2-1-2 Guide Robots
        2-1-3 Wearable Guidance Devices
        2-1-4 Robotic Guide Canes
        2-1-5 Artificial Retinas
        2-1-6 Smartphone-Based Guidance
    2-2 Deep Learning
        2-2-1 Convolutional Neural Networks
        2-2-2 Deep Learning Development Tools
    2-3 Overview of Depth Cameras
        2-3-1 Intel RealSense SR300
        2-3-2 Intel RealSense R200
        2-3-3 ZED Stereo Camera
        2-3-4 Microsoft Kinect
Chapter 3: Methods
    3-1 System Architecture
    3-2 Walking Mode
        3-2-1 Floor Detection
        3-2-2 Floor Information Analysis
        3-2-3 Obstacle Detection
    3-3 Object-Finding Mode
        3-3-1 Object Detection
        3-3-2 Final Object Decision
Chapter 4: Experimental Design and Results
    4-1 Walking Mode Experiments
        4-1-1 Floor Segmentation Experiment Design
        4-1-2 Floor Segmentation Evaluation Method
        4-1-3 Floor Segmentation Results
        4-1-4 Obstacle Detection Experiments
    4-2 Object-Finding Mode Experiments
        4-2-1 Evaluation Method
        4-2-2 Scenario Design
        4-2-3 Convolutional Neural Network Experiments
        4-2-4 Results of Finding Target Objects in Images
Chapter 5: Conclusions and Future Work
    5-1 Conclusions
    5-2 Future Work
References
References
[1] Disability Services Information Network (身心障礙者服務資訊網). [Online]. Available: http://disable.yam.org.tw/node/5. [Accessed: 16-May-2017].
[2] Taiwan Guide Dog Association (台灣導盲犬協會). [Online]. Available: http://p.udn.com.tw/upf/newmedia/2016_vist/03/20160304_noseedog/index.html. [Accessed: 17-May-2017].
[3] The Miniguide mobility aid. [Online]. Available: http://www.gdp-research.com.au/minig_1.htm. [Accessed: 30-May-2017].
[4] S. Shoval, I. Ulrich, and J. Borenstein, “NavBelt and the Guide-Cane [obstacle-avoidance systems for the blind and visually impaired],” IEEE Robotics & Automation Magazine, Vol. 10, pp. 9-20, 2003.
[5] I. Ulrich and J. Borenstein, “The GuideCane: Applying mobile robot technologies to assist the visually impaired,” IEEE Transactions on Systems, Man, and Cybernetics - Part A: Systems and Humans, Vol. 31, pp. 131-136, 2001.
[6] NavBelt introduction page. [Online]. Available: http://www-personal.umich.edu/~johannb/navbelt.htm. [Accessed: 19-June-2017].
[7] GuideCane introduction page. [Online]. Available: http://www.arrickrobotics.com/robomenu/guidecan.html. [Accessed: 19-June-2017].
[8] 36Kr: Doogo, an electronic guide dog meant to make travel easier for the blind. [Online]. Available: https://36kr.com/p/5057479.html. [Accessed: 19-June-2017].
[9] NSK Develops a Guide-Dog Style Robot. [Online]. Available: http://www.nskeurope.de/nsk-develops-a-guide-dog-style-1074.htm. [Accessed: 19-June-2017].
[10] Y. Wei and M. Lee, “A Guide-dog Robot System Research for the Visually Impaired,” IEEE International Conference on Industrial Technology, pp. 800-805, 2014.
[11] Y. Wei, X. Kou, and M. C. Lee, “A new vision and navigation research for a guide-dog robot system in urban system,” IEEE/ASME International Conference on Advanced Intelligent Mechatronics, pp. 1290-1295, 2014.
[12] Y. Wei, X. Kou, and M. C. Lee, “Development of a guide-dog robot system for the visually impaired by using fuzzy logic based human-robot interaction approach,” International Conference on Control, Automation and Systems, pp. 136-141, 2013.
[13] C. Galindo, J. Gonzalez, and J. A. Fernandez-Madrigal, “Control architecture for human-robot integration: Application to a robotic wheelchair,” IEEE Transactions on Systems, Man, and Cybernetics, Part B (Cybernetics), Vol. 36, pp. 1053-1067, 2006.
[14] V. Kulyukin, C. Gharpure, J. Nicholson, and G. Osborne, “Robot-assisted wayfinding for the visually impaired in structured indoor environments,” Autonomous Robots, Vol. 21, pp. 29-41, 2006.
[15] Pivothead Eyewear official website. [Online]. Available: http://www.Pivothead.com/local-live3/. [Accessed: 20-June-2017].
[16] SoftBank-Backed CloudMinds Aims To Create A Cloud Intelligence Ecosystem. [Online]. Available: https://www.chinamoneynetwork.com/2017/02/15/softbank-backed-cloudminds-aims-to-create-a-cloud-intelligence-ecosystem. [Accessed: 20-June-2017].
[17] C. Ye, S. Hong, and X. Qian, “A co-robotic cane for blind navigation,” Systems, Man and Cybernetics (SMC), 2014 IEEE International Conference on, pp. 1082-1087, 2014.
[18] C. Ye, S. Hong, X. Qian, and W. Wu, “Co-Robotic Cane: A New Robotic Navigation Aid for the Visually Impaired,” IEEE Systems, Man, and Cybernetics Magazine, Vol. 2, pp. 33-42, 2016.
[19] Y. Hirahara, Y. Sakurai, Y. Shiidu, K. Yanashima, and K. Magatani, “Development of the navigation system for the visually impaired by using white cane,” 28th Annual International Conference of the IEEE Engineering in Medicine and Biology Society (EMBS '06), 2006.
[20] W. Uemura and T. Hayama, “A white cane mounted with infrared sensors to detect automatic doors,” Consumer Electronics - Berlin (ICCE-Berlin), 2015 IEEE 5th International Conference on, 2015.
[21] Online article. [Online]. Available: http://montanahan.blogspot.tw/2017/12/. [Accessed: 19-June-2017].
[22] Second Sight official website. [Online]. Available: http://www.secondsight.com/system-overview-en.html. [Accessed: 19-June-2017].
[23] Innovative design and technology overview of the “A Gentle Tug” guidance aid (導盲秘書拉一把). [Online]. Available: https://www.artc.org.tw/chinese/03_service/03_02detail.aspx?pid=2232. [Accessed: 30-May-2017].
[24] Y. LeCun, Y. Bengio, and G. Hinton, “Deep learning,” Nature, Vol. 521, pp. 436-444, 2015.
[25] D. H. Hubel and T. N. Wiesel, “Receptive fields, binocular interaction, and functional architecture in the cat’s visual cortex,” Journal of Physiology, Vol. 160, pp. 106-154, 1962.
[26] Y. Boureau, J. Ponce, and Y. LeCun, “A Theoretical Analysis of Feature Pooling in Visual Recognition,” Proceedings of the 27th International Conference on Machine Learning (ICML-10), pp. 111-118, 2010.
[27] Intel RealSense SR300 official website. [Online]. Available: https://software.intel.com/en-us/articles/introducing-the-intel-realsense-camera-sr300. [Accessed: 20-May-2017].
[28] Intel RealSense R200 introduction article. [Online]. Available: https://software.intel.com/en-us/articles/realsense-r200-camera. [Accessed: 20-May-2017].
[29] ZED Stereo Camera official website. [Online]. Available: https://www.stereolabs.com/zed/specs/. [Accessed: 21-May-2017].
[30] Kinect Wikipedia page. [Online]. Available: https://en.wikipedia.org/wiki/Kinect. [Accessed: 19-May-2017].
[31] Ramer-Douglas-Peucker algorithm. [Online]. Available: https://en.wikipedia.org/wiki/Ramer–Douglas–Peucker_algorithm. [Accessed: 3-July-2017].
[32] Normal distribution Wikipedia page (常態分佈). [Online]. Available: https://zh.wikipedia.org/wiki/正态分布. [Accessed: 14-June-2017].
[33] A. Paszke, A. Chaurasia, S. Kim, and E. Culurciello, “ENet: A Deep Neural Network Architecture for Real-Time Semantic Segmentation,” arXiv preprint arXiv:1606.02147, 2016.
Advisor: Mu-Chun Su (蘇木春)    Approval Date: 2017-08-15
