NCU Institutional Repository: Item 987654321/81447


    Please use this identifier to cite or link to this item: http://ir.lib.ncu.edu.tw/handle/987654321/81447


    Title: Monocular Distance Estimation Based on Deep Learning and Outdoor Walking Control for a Robot (基於深度學習之單眼距離估測與機器人戶外行走控制)
    Authors: 邱文欣;Chiu, Wen-Hsin
    Contributors: Department of Electrical Engineering
    Keywords: Robot control; Obstacle avoidance control; Deep learning; Monocular depth prediction
    Date: 2019-07-25
    Issue Date: 2019-09-03 15:53:57 (UTC+8)
    Publisher: National Central University
    Abstract: This thesis designs and improves the walking-guidance and obstacle-avoidance functions of the outdoor guide robot from reference [1], so that the robot can assist visually impaired users more reliably outdoors. First, the user selects a destination on a cell phone; the phone plans the route with the Google Maps API and, based on the robot's current distance to the destination and its yaw angle, sends go-straight, turn, or stop navigation commands to the robot's main control computer.

    The main computer captures images from a single webcam. Semantic segmentation identifies the walkable road area, while a deep-learning network predicts a disparity map for obstacles ahead. The estimated disparity is converted to depth through a reciprocal equation, and the obstacle distance is then determined from a depth histogram. In this study, distances between 0.8 m and 4 m are estimated with about 80% accuracy.

    Once the walkable area is found, a Hough line transform traces the right-hand boundary of the road, and the road area is divided into blocks, each representing a path segment; suitable trajectory points within the blocks are determined experimentally. A fuzzy controller then computes the angular velocities of the left and right wheels so that the robot follows the trajectory points.

    During travel, the monocular distance estimation above drives obstacle avoidance: if an obstacle lies within the center region of the image and is closer than 3.5 m, the robot performs an avoidance maneuver; if an obstacle suddenly appears within 1 m, the robot stops and resumes moving only once no obstacle remains within 1 m ahead. Outdoor experiments on the NCU campus show that the obstacle-distance estimation is more accurate and the motion control more stable than in [1], allowing the guide robot to reach its destination more accurately and safely.
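    A minimal illustrative sketch (Python with NumPy; not code from the thesis) of how the reciprocal disparity-to-depth conversion, the histogram-based obstacle distance, and the 3.5 m / 1 m avoidance thresholds described in the abstract could fit together. The scale constant, bin count, and function names are hypothetical assumptions; in the thesis the disparity map comes from a deep network applied to the webcam image, while here it is simply taken as an input array.

    import numpy as np

    DEPTH_MIN, DEPTH_MAX = 0.8, 4.0   # range with ~80% reported accuracy

    def disparity_to_depth(disparity, scale=80.0):
        """Reciprocal conversion depth = scale / disparity; depths outside
        the usable 0.8-4 m band are marked invalid (inf). The scale
        constant is a hypothetical placeholder."""
        depth = scale / np.maximum(disparity.astype(float), 1e-6)
        depth[(depth < DEPTH_MIN) | (depth > DEPTH_MAX)] = np.inf
        return depth

    def obstacle_distance(depth, n_bins=32):
        """Pick the obstacle distance from the most populated bin of the
        depth histogram, as the abstract describes."""
        valid = depth[np.isfinite(depth)]
        if valid.size == 0:
            return None                                  # nothing in range
        counts, edges = np.histogram(valid, bins=n_bins,
                                     range=(DEPTH_MIN, DEPTH_MAX))
        k = int(np.argmax(counts))
        return 0.5 * (edges[k] + edges[k + 1])           # bin center, meters

    def decide_action(dist_m, obstacle_in_center):
        """Decision thresholds from the abstract: stop inside 1 m, avoid a
        centered obstacle closer than 3.5 m, otherwise keep moving."""
        if dist_m is not None and dist_m < 1.0:
            return "stop"              # resume once the 1 m zone is clear
        if obstacle_in_center and dist_m is not None and dist_m < 3.5:
            return "avoid"
        return "follow_trajectory"     # fuzzy control toward next trajectory point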
    Appears in Collections: [Graduate Institute of Electrical Engineering] Electronic Thesis & Dissertation

    Files in This Item:

    File        Size    Format    Views
    index.html  0Kb     HTML      159


    All items in NCUIR are protected by copyright, with all rights reserved.
