Master's/Doctoral Thesis 993211003: Detailed Record

Author: Wei-chieh Lin (林煒傑)    Department: Graduate Institute of Biomedical Engineering
Thesis Title: 以視覺為基礎之盲人導航系統
(An Image-based Navigation System for the Blind)
Access Rights:
1. The author has agreed to make the electronic full text openly available immediately.
2. The open-access electronic full text is licensed to users only for personal, non-profit retrieval, reading, and printing for academic research purposes.
3. Please comply with the relevant provisions of the Copyright Act of the Republic of China; do not reproduce, distribute, adapt, repost, or broadcast this work without authorization.

Abstract (Chinese): This thesis uses a Kinect sensor to build a walking-assistance system for the blind. The system provides four modes: walking mode, exploring mode, positioning mode, and road-crossing mode. (1) Walking mode: using the depth information provided by the sensor, a floor-detection algorithm converts depth into three-dimensional spatial information, builds a description of the environment, and detects obstacles. (2) Exploring mode: using the sensor's color information, SURF (Speeded-Up Robust Features) feature points are detected and tracked, a least-squares method computes the spatial coordinate transformation matrix between the feature points of successive images, and a planar map with landmark information is constructed. (3) Positioning mode: the map is searched for the user's current position and orientation. (4) Road-crossing mode: color images are used to detect the location of pedestrian crosswalks and the width of the road.
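The walking mode described above converts each depth pixel into a 3-D point before floor and obstacle detection. The record does not include the thesis's implementation, so the following is only a minimal Python sketch of standard pinhole back-projection, with assumed Kinect v1 intrinsic values that are not supplied by the source.

```python
import numpy as np

# Approximate Kinect v1 depth-camera intrinsics. These are assumed,
# commonly quoted values -- the thesis record does not supply them.
FX, FY = 594.2, 591.0      # focal lengths in pixels
CX, CY = 320.0, 240.0      # principal point of the 640x480 depth image

def depth_to_points(depth_mm):
    """Back-project a (480, 640) depth map in millimetres into 3-D
    camera coordinates using the pinhole model X = (u - cx) * Z / fx."""
    h, w = depth_mm.shape
    u, v = np.meshgrid(np.arange(w), np.arange(h))
    z = depth_mm / 1000.0              # convert to metres
    x = (u - CX) * z / FX
    y = (v - CY) * z / FY
    return np.dstack((x, y, z))        # (h, w, 3) point cloud

# Floor detection can then fit a plane to the lower points; anything
# rising sufficiently above that plane is reported as an obstacle.
```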
This thesis proposes a floor-detection algorithm that segments the surface the user is walking on and reports the positions of obstacles in real time. It provides methods for building, managing, and localizing within an indoor map database, so that visually impaired users can know their current indoor position and orientation. Crosswalk detection is also provided to help visually impaired users cross roads safely. Speech recognition switches between system modes, and intuitive voice prompts convey environmental information to the user. We hope that this system, used together with a white cane, can turn today's passive guide aids into active ones and give visually impaired people greater freedom of movement.
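The crosswalk detector works on color images, but the abstract does not describe its algorithm. As one plausible illustration only, and not the thesis's method, zebra stripes can be found by Otsu-thresholding the lower part of the frame and counting wide, flat, bright blobs; the helper below, detect_crosswalk_stripes, is hypothetical.

```python
import cv2
import numpy as np

def detect_crosswalk_stripes(bgr_frame, min_stripes=3):
    """Rough zebra-crossing check: Otsu-threshold the lower half of the
    frame and count wide, flat, bright blobs (candidate painted stripes)."""
    gray = cv2.cvtColor(bgr_frame, cv2.COLOR_BGR2GRAY)
    h, w = gray.shape
    roi = gray[h // 2:, :]                     # the road fills the lower half
    # Otsu's method picks a threshold separating bright paint from asphalt
    _, binary = cv2.threshold(roi, 0, 255,
                              cv2.THRESH_BINARY + cv2.THRESH_OTSU)
    contours, _ = cv2.findContours(binary, cv2.RETR_EXTERNAL,
                                   cv2.CHAIN_APPROX_SIMPLE)
    stripes = [cv2.boundingRect(c) for c in contours]
    # Keep regions that are much wider than tall and span a good
    # fraction of the image width, as zebra stripes do when seen head-on.
    stripes = [(x, y, cw, ch) for (x, y, cw, ch) in stripes
               if cw > 0.3 * w and cw > 3 * ch]
    return len(stripes) >= min_stripes, stripes
```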
Abstract (English): In this thesis, we use the Kinect sensor to build a navigation system for the blind. The system provides four modes: walking mode, exploring mode, positioning mode, and road-crossing mode. (1) Walking mode: the system uses depth information and a floor-detection algorithm to build environmental information and detect obstacles. (2) Exploring mode: the system detects and tracks SURF (Speeded-Up Robust Features) feature points in the color information from the sensor; we calculate the coordinate transformation matrix between two images by the least-squares method and create a map. (3) Positioning mode: the system searches the map for the current location and orientation. (4) Road-crossing mode: the system uses color images to detect the crosswalk location and the width of the road.
We propose a floor-detection algorithm that segments the floor region from the depth information and reports obstacle positions to the blind user in real time. We provide methods to build, manage, and localize within an indoor map database, so that blind users can know their position and orientation indoors. We also provide crosswalk detection to help blind users cross roads safely. Speech recognition controls the system mode, and voice output describes the surrounding environment to the user. With this information about the environment, blind users will have less fear when walking through unfamiliar environments with their white canes.
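The exploring mode's "coordinate transformation matrix ... by the least-squares method" is, in the common formulation for matched 3-D feature points, the rigid transform that minimizes the squared alignment error; the classic SVD-based solution of Arun, Huang, and Blostein computes it in closed form. A minimal sketch of that standard solution, illustrative rather than the thesis's own code:

```python
import numpy as np

def rigid_transform_3d(P, Q):
    """Least-squares rigid transform (R, t) mapping point set P onto Q,
    i.e. minimizing sum ||R @ p_i + t - q_i||^2 over corresponding rows.

    P, Q: (N, 3) arrays of matched 3-D feature-point coordinates.
    """
    cP, cQ = P.mean(axis=0), Q.mean(axis=0)    # centroids of both sets
    H = (P - cP).T @ (Q - cQ)                  # 3x3 cross-covariance matrix
    U, _, Vt = np.linalg.svd(H)                # SVD yields the rotation
    R = Vt.T @ U.T
    if np.linalg.det(R) < 0:                   # repair a reflection, if any
        Vt[-1, :] *= -1
        R = Vt.T @ U.T
    t = cQ - R @ cP                            # translation between centroids
    return R, t
```

Chaining such frame-to-frame transforms gives the sensor's trajectory, from which a planar map with landmarks can be assembled as the abstract describes.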
Keywords (Chinese and English)
★ positioning
★ floor detection
★ navigation
★ obstacle detection
★ crosswalk detection
★ blind / visually impaired
Table of Contents
Chinese Abstract
English Abstract
Acknowledgments
Table of Contents
List of Figures
List of Tables
Chapter 1: Introduction
1-1 Research Motivation
1-2 Research Objectives
1-3 Thesis Organization
Chapter 2: Related Work
2-1 Overview of Guide Aids for the Blind
2-1-1 Electronic Travel Aids
2-1-2 Guide Robots
2-1-3 Wearable Aids
2-1-4 Guide Canes
2-1-5 Robotic Guide Dogs
2-1-6 Electronic Chip and Artificial Retina Implants
2-1-7 Discussion of Guide Aids
2-2 Spatial Positioning Techniques
2-3 Simultaneous Localization and Mapping
2-4 Crosswalk Detection Techniques
2-5 Introduction to the Kinect
2-5-1 Hardware Specifications
2-5-2 System Requirements
2-5-3 Kinect Applications
Chapter 3: Methods and Procedures
3-1 System Architecture
3-2 Walking Mode
3-2-1 Floor Detection
3-2-2 Coordinate System Transformation
3-2-3 Obstacle Detection
3-3 Exploring Mode
3-3-1 Feature Point Detection and Tracking
3-3-2 Computation of the Coordinate Transformation Matrix
3-3-3 Building Landmark Information
3-4 Positioning Mode
3-4-1 Map Search
3-4-2 Map Localization
3-5 Road-Crossing Mode
3-5-1 Crosswalk Detection
3-5-2 Crosswalk Direction and Road Width
Chapter 4: Human-Machine Interface and Applications
4-1 System Environment
4-2 Speech System
4-3 System Operation Flow
Chapter 5: Experimental Design and Results
5-1 Walking Mode Experiments
5-1-1 Floor Segmentation and Threshold-Setting Experiments
5-1-2 Discussion of Walking Mode Experiments
5-2 Exploring Mode Experiments
5-2-1 Straight-Line Walking Tests
5-2-2 Rotation Angle Experiments
5-2-3 Real-World Map-Building Tests
5-2-4 Discussion of Exploring Mode Experiments
5-3 Positioning Mode Experiments
5-3-1 Location Positioning Experiments
5-3-2 Discussion of Positioning Mode Experiments
5-4 Road-Crossing Mode Experiments
5-4-1 Crosswalk Detection Experiments
5-4-2 Discussion of Road-Crossing Mode Experiments
Chapter 6: Conclusions and Future Work
6-1 Conclusions
6-2 Future Work
References
Appendix 1: Crosswalk Test Dataset
Advisor: Mu-chun Su (蘇木春)    Approval Date: 2012-7-25
