

    Please use this permanent URL to cite or link to this item: http://ir.lib.ncu.edu.tw/handle/987654321/84001


    Title: LaVIS: Laser, Vision and Inertial Sensing Fusion for Robot Positioning and Navigation
    Author: Wang, Chien-Chun (王建鈞)
    Contributor: Department of Computer Science and Information Engineering
    Keywords: Sensing Fusion; Positioning; Navigation
    Date: 2020-07-28
    Upload time: 2020-09-02 17:53:04 (UTC+8)
    Publisher: National Central University
    Abstract: In recent years, Lidar sensors have been widely used for environment sensing in robot vacuum cleaners, but they typically provide only planar scans and lack height information. When a vacuum robot enters a confined space, it may be unable to leave due to height restrictions. Although visual Simultaneous Localization and Mapping (SLAM) can address such spatial obstacles, its accuracy and robustness still leave room for improvement. This study proposes a sensor fusion method that uses a Lidar sensor to build a first-level baseline map, and fuses Lidar with RGB-D camera depth information and object features to construct a spatial image. The robot's Inertial Measurement Unit (IMU) coordinate system serves as the parent coordinate system, and the other sensors act as child coordinate systems used to locate the reference point of the robot's initial coordinate frame. The robot's trajectory is also integrated into this coordinate system: motion-path information obtained from the motion encoders and the IMU is fused with the image and Lidar coordinates. We combine the A-Star and Dynamic Window Approach (DWA) algorithms for robot path planning, yielding a 2D-Lidar-based sensor fusion SLAM system, LaVIS. This system lets a vacuum robot build 3D maps quickly, plan optimal paths, and dynamically avoid obstacles. Finally, we apply the Machine Intelligence and Automation Technology (MIAT) methodology for LaVIS system design and integration verification. Our experiments show that LaVIS delivers good SLAM performance and accuracy, and can be applied to vacuum robots and a wide range of autonomous robots.
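    The frame hierarchy described in the abstract (IMU as the parent coordinate system, with the 2D Lidar and RGB-D camera as child coordinate systems) can be illustrated with a minimal sketch. The code below is not the thesis implementation; the extrinsic transforms and point values are hypothetical placeholders, and only the general idea of mapping child-frame measurements into the parent IMU frame is shown.

    import numpy as np

    def make_transform(rotation: np.ndarray, translation: np.ndarray) -> np.ndarray:
        """Build a 4x4 homogeneous transform from a 3x3 rotation and a 3-vector translation."""
        T = np.eye(4)
        T[:3, :3] = rotation
        T[:3, 3] = translation
        return T

    def to_parent_frame(points_child: np.ndarray, T_parent_child: np.ndarray) -> np.ndarray:
        """Map Nx3 points from a child sensor frame into the parent (IMU) frame."""
        homogeneous = np.hstack([points_child, np.ones((points_child.shape[0], 1))])
        return (T_parent_child @ homogeneous.T).T[:, :3]

    # Hypothetical extrinsics: Lidar mounted 10 cm above the IMU, camera 5 cm forward and 8 cm up.
    T_imu_lidar = make_transform(np.eye(3), np.array([0.0, 0.0, 0.10]))
    T_imu_camera = make_transform(np.eye(3), np.array([0.05, 0.0, 0.08]))

    lidar_points = np.array([[1.0, 0.0, 0.0], [0.0, 1.0, 0.0]])   # planar 2D Lidar hits (z = 0 in the Lidar frame)
    camera_points = np.array([[0.5, 0.2, 0.3]])                    # RGB-D point carrying height information

    # Both point sets now share the IMU parent frame, so the Lidar's planar baseline map
    # and the camera's depth/height information can be fused into one spatial representation.
    fused = np.vstack([
        to_parent_frame(lidar_points, T_imu_lidar),
        to_parent_frame(camera_points, T_imu_camera),
    ])
    print(fused)

    In this reading of the abstract, the same mechanism extends to the motion encoders and IMU odometry: each measurement is expressed in the parent frame before the fused map is handed to the A-Star global planner and the DWA local planner.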
    Appears in Collections: [Graduate Institute of Computer Science and Information Engineering] Theses and Dissertations

    Files in This Item:

    index.html (0 KB, HTML, 109 views)


    All items in NCUIR are protected by the original copyright.

