Thesis 93522019: Detailed Record




Name: 謝易錚 (Yi-Zeng Hsieh)    Department: Computer Science and Information Engineering
Thesis Title: A Stereo-Vision-Based Aid System for the Blind (以立體視覺實作盲人輔具系統)
Related Theses
★ A Q-learning-based swarm intelligence algorithm and its applications
★ Development of rehabilitation systems for children with developmental delays
★ Comparing teacher assessment and peer assessment from the perspective of cognitive styles: from English writing to game production
★ A prediction model for diabetic nephropathy based on laboratory test values
★ Design of a remote-sensing image classifier based on fuzzy neural networks
★ A hybrid clustering algorithm
★ Development of assistive devices for people with disabilities
★ A study on fingerprint classifiers
★ A study on backlit-image compensation and color reduction
★ Application of neural networks to audit case selection for business income tax
★ A new online learning system and its application to tax audit case selection
★ An eye-tracking system and its applications to human-machine interfaces
★ Data visualization combining swarm intelligence and self-organizing maps
★ Development of a pupil-tracking system for human-machine interface applications for the disabled
★ An online-learning neuro-fuzzy system based on artificial immune systems and its applications
★ Application of genetic algorithms to speech signal descrambling
  1. The electronic full text of this thesis is authorized for immediate open access.
  2. The open-access electronic full text is licensed only for personal, non-commercial retrieval, reading, and printing for academic research purposes.
  3. Please comply with the Copyright Act of the Republic of China; do not reproduce, distribute, adapt, repost, or broadcast the work without authorization.

Abstract (Chinese): To grasp the overall layout of an unfamiliar environment, a visually impaired person must use a white cane to tap the ground or objects, confirming whether the way ahead is passable and where obstacles lie; anything the cane cannot reach remains unknown. The user is guided around an obstacle by the aid only upon encountering it and has no ability to choose a walking path autonomously. We therefore aim to develop a system that conveys environmental information to the visually impaired in advance so that, used together with the white cane, they can choose their direction of movement based on the system's cues, increasing both the safety and the autonomy of blind travel.
This thesis proposes a stereo-vision approach. The floor region of the environment is first segmented out; the image is then quantized, and the non-floor regions are matched block by block according to the quantized image regions. A clustering method removes image noise and builds up information about the unfamiliar environment; the system detects obstacles and indicates how far an obstacle is from the blind user and in which direction it lies. While walking, the user can thus know the overall layout of the environment in advance and, combining this information with the white cane, walk safely, resolving the user's questions of "Where is the obstacle? How far away is it? How big is it?" Even this preliminary study turns current guide aids from passive into active devices, giving the visually impaired greater freedom of movement.
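The abstract states that the system reports how far away an obstacle is, based on disparities from stereo matching. The table of contents indicates the thesis fits an exponential distance-estimation function (Sections 3.8.1 and 3.8.2); as general background only, and not the thesis's actual formula, the classical triangulation relation for a rectified stereo pair is

```latex
Z = \frac{f\,B}{d}
```

where \(Z\) is the depth of the matched point, \(f\) the focal length, \(B\) the baseline between the two cameras, and \(d\) the disparity; larger disparities correspond to nearer obstacles.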
Abstract (English): The white cane is the most pervasive travel aid for the blind. We present an idea of using stereo matching to develop a travel aid for the blind. In this approach, we first use a segmentation algorithm to segment the floor region from the image captured by the web camera. The images are then segmented into several non-overlapping homogeneous regions using a color segmentation algorithm. For each homogeneous region, a rectangular window large enough to cover the region is found. A local match with the found window size is then executed to find the disparity for the considered region. A clustering algorithm is adopted to cluster the disparities into several major different values. Finally, a piece-wise disparity map is constructed. Based on the disparity map, information about the unfamiliar environment in front of the blind user is output to him or her. With this information, blind users will have less fear in walking through unfamiliar environments with their white canes.
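The disparity-clustering step described above ("cluster the disparities into several major different values") can be illustrated with a minimal one-dimensional k-means sketch. The sample disparities and the choice of k = 2 below are illustrative assumptions, not data or code from the thesis:

```python
import random

def kmeans_1d(values, k, iters=20, seed=0):
    """Cluster scalar disparity values into k groups (Lloyd's algorithm).

    Returns (centers, labels): the k cluster centers and, for each
    input value, the index of its nearest center.
    """
    rng = random.Random(seed)
    centers = rng.sample(values, k)
    for _ in range(iters):
        # Assignment step: attach each disparity to its nearest center.
        labels = [min(range(k), key=lambda j: abs(v - centers[j]))
                  for v in values]
        # Update step: move each center to the mean of its members.
        for j in range(k):
            members = [v for v, lab in zip(values, labels) if lab == j]
            if members:
                centers[j] = sum(members) / len(members)
    return centers, labels

# Hypothetical disparities: a near obstacle (~40 px) and far background (~5 px).
disparities = [38, 41, 40, 39, 42, 5, 6, 4, 5, 7]
centers, labels = kmeans_1d(disparities, k=2)
print(sorted(round(c, 1) for c in centers))  # two dominant disparity levels
```

Collapsing the dense disparity map to a few dominant levels is what lets the system describe the scene as a small set of obstacles at distinct distances rather than a per-pixel depth field.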
Keywords (Chinese): ★ stereo vision
★ mobility aid
Keywords (English): ★ stereo matching
★ travel aid
Table of Contents
Abstract (Chinese)
Abstract (English)
Chapter 1  Introduction
1.1 Research Motivation
1.2 Research Objectives
1.3 Thesis Organization
Chapter 2  Survey of Travel Aids for the Blind
2.1 Travel Aids for the Blind
2.2 Electronic Travel Aids
2.3 Guide Robots
2.4 Wearable Aids
2.5 Guide Canes
2.6 Electronic Chip and Artificial Retina Implants
2.7 Stereo-Vision Aids
2.8 Discussion of Travel Aids for the Blind
Chapter 3  Methods and Procedures
3.1 The Vision-Based Travel Aid
3.1.1 System Hardware
3.1.2 System Software Algorithms
3.2 Floor Region Growing
3.2.1 The RGB Color Space
3.2.2 The HSV Color Space
3.2.3 Discussion of Color Spaces
3.2.4 Floor Image Segmentation
3.2.5 Image Quantization
3.2.6 Image Data Reduction
3.2.7 Template Matching
3.2.8 Floor Region Growing
3.3 Color Quantization
3.3.1 Classification of Color Quantization Algorithms
3.3.2 Overview of Color Quantization Algorithms
3.3.3 The K-means Clustering Algorithm
3.3.4 Color Quantization for Guide Images
3.4 Image Rectification
3.4.1 Color Correction
3.4.2 Horizontal Alignment
3.5 Stereo Vision
3.5.1 Image Depth
3.5.2 Stereo Matching
3.5.3 Choosing the Stereo Matching Window Size
3.6 Disparity Clustering
3.7 Obstacle Labeling
3.8 Distance Estimation
3.8.1 Exponential Function Approximation
3.8.2 The Distance Estimation Function
3.9 Determining Object Size and Direction
3.10 System Algorithm Flow
Chapter 4  Experimental Analysis and Discussion
4.1 Real-Environment Images
4.2 Obstacle Size, Distance, and Direction in Real Environments
Chapter 5  Conclusions and Future Work
5.1 Conclusions
References
Advisor: 蘇木春 (Mu-Chun Su)    Date of Approval: 2006-07-24
