Master's/Doctoral Thesis 93522062 — Detailed Record




Name: Yu-Chen Chiang (蔣育承)    Department: Computer Science and Information Engineering
Thesis Title: 以視覺為基礎之機器人導航及應用
(A Vision-Based Robot Navigation System and Its Applications)
Related Theses
★ A Q-Learning-Based Swarm Intelligence Algorithm and Its Applications
★ Development of a Rehabilitation System for Children with Developmental Delay
★ Comparing Teacher Assessment and Peer Assessment from the Perspective of Cognitive Style: From English Writing to Game Design
★ A Prediction Model for Diabetic Nephropathy Based on Laboratory Test Values
★ Design of a Remote-Sensing Image Classifier Based on Fuzzy Neural Networks
★ A Hybrid Clustering Algorithm
★ Development of Assistive Devices for People with Disabilities
★ A Study on Fingerprint Classifiers
★ A Study on Backlit Image Compensation and Color Quantization
★ Application of Neural Networks to Business Income Tax Audit Case Selection
★ A New Online Learning System and Its Application to Tax Audit Case Selection
★ An Eye-Tracking System and Its Applications to Human-Machine Interfaces
★ Data Visualization Combining Swarm Intelligence and Self-Organizing Maps
★ Development of a Pupil-Tracking System as a Human-Machine Interface for People with Disabilities
★ An Online Learning Neuro-Fuzzy System Based on the Artificial Immune System and Its Applications
★ Application of Genetic Algorithms to Speech Descrambling
  1. This electronic thesis is approved for immediate open access.
  2. The open-access electronic full text is licensed only for personal, non-commercial retrieval, reading, and printing for the purpose of academic research.
  3. Please comply with the Copyright Act of the Republic of China; do not reproduce, distribute, adapt, repost, or broadcast this work without authorization.

Abstract (Chinese) For a navigating robot, localization within the environment is a difficult yet fundamental task. This thesis proposes a robot localization and map-building system. With the proposed algorithm and system, a robot can automatically explore and patrol an environment it has learned, or plan a shortest path to reach locations specified in advance by the user.
Before learning a new environment, the robot first enters the exploration mode. In this mode, the robot uses its vision and infrared sensors to build a map represented as a graph of nodes and edges. A node represents a distinctive location (such as an intersection), and traversable paths between nodes are connected by edges. At each node, images of the surrounding environment are collected as a training data set. When the exploration ends and the robot returns to the starting point, these training data are used to train a multilayer perceptron to memorize the environment. After the exploration mode finishes and the environment map has been built, the robot enters the operation mode.
In the operation mode, the robot can automatically patrol the environment while the user monitors the transmitted images remotely, or patrol specific locations assigned by the user. During navigation, the robot determines its current location using the previously trained multilayer perceptron. Finally, we validate the proposed algorithm and system by training a SONY AIBO robot dog to navigate a real home environment.
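The abstract describes the learned map as a graph of nodes (distinctive locations) and edges (traversable paths) over which a shortest path to user-specified locations is planned. The thesis record contains no source code, so the following Python sketch is only an illustration under assumed names (GraphMap, shortest_path, and the example locations and distances are all hypothetical); it uses Dijkstra's algorithm, while the thesis does not state which shortest-path method it actually employs.

    # Illustrative sketch only (not from the thesis): a graph map of nodes and
    # weighted edges, with Dijkstra's algorithm for shortest-path planning.
    import heapq
    from collections import defaultdict

    class GraphMap:
        """Environment map: nodes are distinctive locations, edges are paths."""
        def __init__(self):
            self.edges = defaultdict(list)          # node -> [(neighbor, distance)]

        def add_edge(self, a, b, distance):
            self.edges[a].append((b, distance))     # assume paths are traversable both ways
            self.edges[b].append((a, distance))

        def shortest_path(self, start, goal):
            """Return the minimum-distance node sequence from start to goal."""
            queue = [(0.0, start, [start])]
            visited = set()
            while queue:
                cost, node, path = heapq.heappop(queue)
                if node == goal:
                    return path
                if node in visited:
                    continue
                visited.add(node)
                for neighbor, dist in self.edges[node]:
                    if neighbor not in visited:
                        heapq.heappush(queue, (cost + dist, neighbor, path + [neighbor]))
            return None

    # Hypothetical map learned during exploration
    m = GraphMap()
    m.add_edge("origin", "hallway", 3.0)
    m.add_edge("hallway", "intersection", 2.0)
    m.add_edge("intersection", "living_room", 4.0)
    m.add_edge("hallway", "living_room", 7.0)
    print(m.shortest_path("origin", "living_room"))   # ['origin', 'hallway', 'intersection', 'living_room']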
Abstract (English) Robot localization has been a very challenging task in mobile robotics, since it is essential for a broad range of mobile robot tasks. This thesis proposes a new vision-based robot localization and map-building algorithm. With the proposed algorithm, a robot can automatically patrol an environment whose information has been learned, or plan a shortest path to visit particular locations pre-specified by the user.
To learn a new environment, a robot must first carry out the exploration procedure (EP). In the EP, the robot uses its vision and an infrared sensor to build a map of the unknown environment. The map is represented as a graph consisting of vertices and edges. While the robot is navigating, a vertex is generated whenever a distinctive location (e.g., an intersection or a dead end) is detected, and edges are used to connect these vertices. At each vertex or particular location, images of the environment are stored in a training data set. After the robot has finished the exploration tour and returned to the original starting position, a two-layer perceptron is trained on the collected training data set to memorize the environment. Once the environment map has been built at the end of the EP, the robot enters the operation procedure (OP).
In the OP, the robot may automatically patrol the environment and transmit images to remote clients via a web browser, or execute a particular patrolling task assigned by the user. During a navigation tour, the robot determines its location by computing a match between the observation and the expectation derived from the database; the match is computed by feeding the observation to the trained MLP. Finally, the performance of the proposed algorithm is demonstrated by training a SONY AIBO to navigate a home environment.
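The localization step above amounts to feeding the current observation to the trained perceptron and reading off a membership degree for each learned location. As a hedged illustration only (the feature extraction, network dimensions, weights, and the rejection threshold below are all assumptions, not the thesis implementation), a Python sketch of this matching might look like:

    # Illustrative sketch only: localization by feeding an observed feature
    # vector to a trained two-layer perceptron and picking the location with
    # the highest membership degree.
    import numpy as np

    def mlp_forward(x, W1, b1, W2, b2):
        """Two-layer perceptron: sigmoid hidden layer, softmax output over locations."""
        h = 1.0 / (1.0 + np.exp(-(W1 @ x + b1)))    # hidden activations
        z = W2 @ h + b2
        e = np.exp(z - z.max())
        return e / e.sum()                          # membership degree per location

    def localize(feature_vector, weights, location_names, threshold=0.6):
        """Return the matched location, or None if no membership is confident enough."""
        memberships = mlp_forward(feature_vector, *weights)
        best = int(np.argmax(memberships))
        if memberships[best] < threshold:
            return None, memberships
        return location_names[best], memberships

    # Hypothetical call shape with random (untrained) weights
    rng = np.random.default_rng(0)
    names = ["origin", "hallway", "intersection", "living_room"]
    W1, b1 = rng.normal(size=(8, 16)), np.zeros(8)   # assumed 16-D feature vector, 8 hidden units
    W2, b2 = rng.normal(size=(4, 8)), np.zeros(4)    # assumed 4 learned locations
    print(localize(rng.normal(size=16), (W1, b1, W2, b2), names))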
Keywords (Chinese) ★ 室內定位 (indoor localization)
★ 導航 (navigation)
★ 機器人 (robot)
Keywords (English) ★ localization
★ navigation
★ robot
Table of Contents
Abstract (Chinese)
Abstract (English)
Acknowledgements
Table of Contents
List of Figures
List of Tables
Chapter 1  Introduction
1.1 Motivation
1.2 Objectives
1.3 Thesis Organization
Chapter 2  Related Work and Hardware Overview
2.1 Overview of Navigation
2.1.1 Localization
2.1.2 Obstacle Avoidance
2.2 Hardware Overview
Chapter 3  System Architecture and Methods
3.1 Initialization and Calibration
3.1.1 RGB-to-HSV Color Space Conversion
3.1.2 Template Matching
3.1.3 Turning-Angle Computation Based on Template Matching
3.1.4 Edge Detection and Absolute-Orientation Correction
3.1.5 Origin Localization
3.2 Exploration Mode
3.2.1 Road-Width Computation
3.2.2 Road Detection
3.2.3 Map Representation
3.2.4 Selection of Training Locations
3.2.5 Extraction of Environment-Image Feature Vectors
3.2.6 Location Training
3.3 Operation Mode
3.3.1 Path Planning
3.3.2 Obtaining Location Membership Degrees
3.3.3 Location Determination
3.3.4 Walking-Direction Decision
3.3.5 Target-Direction Computation
3.3.6 Absolute-Direction Computation for Returning to the Original Path
3.3.7 Obstacle-Avoidance Direction Computation
Chapter 4  Experimental Results
4.1 System Architecture Overview
4.2 Operating-State Experiments
4.2.1 Experimental Environment
4.2.2 Straight-Walking Tests
4.2.3 Turning Tests
4.2.4 Location Recognition Rate and Location Membership Tests
4.3 Exploration-Mode Experiments
4.3.1 Map Size and Location Error
4.3.2 Location-to-Location Angular Error
4.4 Operation-Mode Tests
4.4.1 Experimental Results
4.4.2 Experiment Illustrations
4.5 Discussion of Experiments and Issues
4.5.1 Discussion of the Exploration Mode
4.5.2 Discussion of the Operation Mode
Chapter 5  Conclusions and Future Work
References
References [1] Y. Ando and S. Yuta, “Following a Wall by an Autonomous Mobile Robot with a Sonar-Ring,” IEEE International Conference on Robotics and Automation, vol. 4, pp. 2599-2606, 1995.
[2] N. Ayache and F. Lustman, “Trinocular Stereo Vision for Robotics,” IEEE Transactions on Pattern Analysis and Machine Intelligence, vol. 13, no. 1, pp. 73-85, 1991.
[3] P. Bahl and V. N. Padmanabhan, “RADAR: an in-building RF-based user location and tracking system,” in INFOCOM 2000. Nineteenth Annual Joint Conference of the IEEE Computer and Communications, vol. 2, pp. 775-784, March 2000.
[4] M. Bertozzi and A. Broggi, “GOLD: A Parallel Real-Time Stereo Vision System for Generic Obstacle and Lane Detection,” IEEE Transactions on Image Processing, vol. 7, no. 1, pp. 62-81, 1998.
[5] J. Borenstein and Y. Koren, “Real-Time Obstacle Avoidance for Fast Mobile Robots,” IEEE Trans. Systems, Man, and Cybernetics, vol. 19, no. 5, pp. 1179-1187, 1989.
[6] R. Cassinis, D. Grana, and A. Rizzi, “Using Colour Information in an Omnidirectional Perception System for Autonomous Robot Localization,” Proceedings of the First Euromicro Workshop on Advanced Mobile Robot, pp. 172-176, Oct. 1996.
[7] C. T. Chang, “Design of Obstacle Avoidance and Navigation Strategies for an Automatic Guided Vehicle Using Camera Vision and Infrared Sensing,” Master Thesis, Electrical Engineering, N.T.U.T., 2001.
[8] A. Clerentin, L. Delahoche, and E. Brassart, “Cooperation between two omnidirectional perception systems for mobile robot localization,” IEEE/RSJ International Conference on Intelligent Robots and Systems, vol. 2, pp. 1499-1504, Nov. 2000.
[9] S. Ernst, C. Stiller, J. Goldbeck, and C. Roessig, “Camera calibration for lane and obstacle detection,” Proc. IEEE/IEEJ/JSAI International Conference on Intelligent Transportation Systems, pp. 356-361, Oct. 1999.
[10] E. Frontoni and P. Zingaretti, “A vision based algorithm for active robot localization,” Proc. IEEE International Symposium on Computational Intelligence in Robotics and Automation, pp. 347-352, June 2005.
[11] R. C. Gonzalez and R. E. Woods, Digital Image Processing, 2nd ed., Addison-Wesley, 1992.
[12] H. Haddad, M. Khatib, S. Lacroix, and R. Chatila, “Reactive navigation in outdoor environments using potential fields,” Proc. IEEE International Conference on Robotics and Automation, vol. 2, pp. 1232-1237, May 1998.
[13] Y. Han and H. Hahn, “Localization and Classification of Target Surfaces Using Two Pairs of Ultrasonic Sensors,” Robotics and Autonomous Systems, vol. 1, pp. 31-41, 2000.
[14] H. Ishiguro and S. Tsuji, “Active Vision By Multiple Visual Agents,” Proc. IEEE/RSJ International Conference on Intelligent Vehicles, vol. 3, pp. 2195-2202, 1992.
[15] G. Jang, S. Kim, J. Kim, and I. Kweon, “Metric localization using a single artificial landmark for indoor mobile robots,” IEEE/RSJ International Conference on Intelligent Robots and Systems, pp. 2857-2862, Aug. 2005.
[16] M. R. Kabuka and A. E. Arenas, “Position Verification of a Mobile Robot Using Standard Pattern,” IEEE Journal of Robotics and Automation, vol. 3, no. 6, pp. 505-516, Dec. 1987.
[17] E. Kruse and F.M. Wahl, “Camera-based observation of obstacle motions to derive statistical data for mobile robot motion planning,” Proc. IEEE International Conference on Robotics and Automation, vol. 1, pp. 662-667, May 1998.
[18] K. Lawton and E. Shrecengost, “The Sony AIBO: Using IR for Maze Navigation,” Tekkotsu Homepage, Available: http://www.cs.cmu.edu/~tekkotsu/index.html.
[19] Y. W. Lin, “The Research of robot Building Map with an Ultrasonic Sensor,” Master Thesis, Department of Engineering Science, N.C.K.U., 2004.
[20] L. M. Lorigo, R. A. Brooks, and W. E. L. Grimson, “Visually-Guided Obstacle Avoidance in Unstructured Environments,” IEEE Conference on Intelligent Robots and Systems, vol. 1, pp. 373-379, Sep. 1997.
[21] Q. T. Luong, J. Weber, D. Koller, and J. Malik, “An integrated stereo-based approach to automatic vehicle guidance,” 5th International Conference on Computer Vision, pp. 52-57, June 1995.
[22] Y. Matsumoto, M. Inaba, and H. Inoue, “Visual navigation using view-sequenced route representation,” Proc. IEEE International Conference on Robotics and Automation, vol. 1, pp. 83-88, Apr. 1996.
[23] L. Montano and J. R. Asensio, “Real-time robot navigation in unstructured environments using a 3D laser rangefinder,” Proc. IEEE/RSJ International Conference on Intelligent Robots and Systems, vol. 2, pp. 526-532, 1997.
[24] A. Ohya, A. Kosaka, and A. Kak, “Vision-based Navigation by a Mobile Robot with Obstacle Avoidance using Single-Camera Vision and Ultrasonic Sensing,” IEEE Transactions on Robotics and Automation, vol. 14, no. 6, pp. 969-978, Dec. 1998.
[25] E. M. Petriu, “Automated Guided Vehicle with Absolute Encoded Guide-path,” IEEE Transactions on Robotics and Automation, vol. 7, no. 4, pp. 562-565, Aug. 1991.
[26] C. C. Sun, “A Low-Cost Travel-Aid for the Blind,” Master Thesis, Department of Computer Science and Information Engineering, N.C.U., 2005.
[27] C. Thorpe, M. H. Hebert, T. Kanade, and S. A. Shafer, “Vision and Navigation for the Carnegie-Mellon Navlab,” IEEE Transactions on Pattern Analysis and Machine Intelligence, vol. 10, no. 3, pp. 362-373, May 1988.
[28] P. Veelaert and W. Bogaerts, “Ultrasonic Potential Field Sensor for Obstacle Avoidance,” IEEE Trans. on Robotics and Automation, vol. 15, no. 4, Aug. 1999.
[29] M. A. Youssef, A. Agrawala, and A. U. Shankar, “WLAN location determination via clustering and probability distributions,” Proceedings of the First IEEE International Conference on Pervasive Computing and Communications, pp. 143-150, March 2003.
[30] iRobot Corporation, Available: http://www.irobot.com/
[31] SECOM IS Lab. - Service Robot Group, Available: http://www.secom.co.jp/isl/e/org/CTD/srg/index.html
[32] Sony Global - AIBO Global Link, Available: http://www.sony.net/Products/aibo/
[33] 蘇木春 and 張孝德, Machine Learning: Neural Networks, Fuzzy Systems, and Genetic Algorithms (in Chinese), 全華, 1999.
[34] 蔡明志, Data Structures Using C++ (in Chinese), 碁峰資訊, 1999.
Advisor: Mu-Chun Su (蘇木春)    Approval Date: 2006-7-24
