Master's/Doctoral Thesis 104582004: Complete Metadata Record

DC Field / Value / Language
dc.contributor: 資訊工程學系 (Department of Computer Science and Information Engineering) (zh_TW)
dc.creator: 王建鈞 (zh_TW)
dc.creator: Chien-Chun Wang (en_US)
dc.date.accessioned: 2020-07-28T07:39:07Z
dc.date.available: 2020-07-28T07:39:07Z
dc.date.issued: 2020
dc.identifier.uri: http://ir.lib.ncu.edu.tw:444/thesis/view_etd.asp?URN=104582004
dc.contributor.department: 資訊工程學系 (Department of Computer Science and Information Engineering) (zh_TW)
dc.description: 國立中央大學 (zh_TW)
dc.description: National Central University (en_US)
dc.description.abstract: 近年來Lidar感測器被大量應用在掃地機器人環境感測,但通常僅提供平面的環境掃描,而缺乏高度的資訊。當掃地機器人進入某空間領域,可能因高度的限制無法離開此空間。使用視覺Simultaneous localization and mapping (SLAM)雖然可以解決空間障礙的問題,但精確度(accuracy)和強健性(robustness)仍存在改善空間。本研究提出一種感測融合方法,透過Lidar感測器建立第一層基礎(baseline)地圖,融合Lidar與RGBD攝影機的深度資訊與物件特徵建立空間影像,我們以機器人的Inertial measurement unit (IMU)座標系統作為母座標系統,讓其他感測器為子座標系統來定位目前機器人初始座標系統參考點。機器人的移動軌跡也結合進座標系統,透過運動編碼器與IMU資訊獲得運動路徑資訊,與影像、Lidar座標融合,我們使用A-Star與Dynamic window approach (DWA)融合兩種演算法從事機器人路徑規劃,完成一個以2D Lidar為主的感測融合SLAM系統:LaVIS。此一系統可以讓掃地機器人快速建立3D地圖，並規劃最佳路徑,動態閃避障礙物。最後我們以Machine Intelligence and Automation Technology (MIAT)方法論進行LaVIS系統設計和整合驗證,我們的實驗證明LaVIS系統具有良好的SLAM性能和精確性,可以應用在掃地機器人和廣泛的自主型機器人。 (zh_TW)
dc.description.abstract: In recent years, Lidar sensors have been widely used for environment sensing on robot vacuum cleaners, but they usually provide only a planar scan of the environment and lack height information. When the robot enters a confined space, it may be unable to leave that space because of height restrictions. Visual Simultaneous Localization and Mapping (SLAM) can solve the problem of such spatial obstacles, but its accuracy and robustness still leave room for improvement. We propose a sensor fusion method that uses a Lidar sensor to build a first-level baseline map and fuses the depth information and object features from the Lidar and an RGB-D camera to build a spatial image. The robot's Inertial Measurement Unit (IMU) coordinate system serves as the parent coordinate system, and the other sensors act as child coordinate systems to locate the robot's initial coordinate reference point. The robot's trajectory is also integrated into this coordinate system: the motion path is obtained from the motion encoders and the IMU and fused with the image and Lidar coordinates. We combine the A-Star and Dynamic Window Approach (DWA) algorithms for robot path planning, completing a 2D-Lidar-centered sensor fusion SLAM system, LaVIS. This system allows a robot vacuum cleaner to build 3D maps quickly, plan optimal paths, and avoid obstacles dynamically. Finally, we apply the Machine Intelligence and Automation Technology (MIAT) methodology to the design and integration verification of the LaVIS system. Our experiments show that LaVIS achieves good SLAM performance and accuracy and can be applied to robot vacuum cleaners and a wide range of autonomous robots. (en_US)
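The abstract describes a coordinate hierarchy in which the IMU frame acts as the parent and each sensor is a child frame whose measurements are mapped back into the parent frame. The following Python snippet is a minimal sketch of that idea only; the frame names, mounting offsets, and transform values are illustrative assumptions, not taken from the thesis.

```python
# Minimal sketch of the parent/child coordinate idea from the abstract:
# the IMU frame is the parent, and a sensor (here, the Lidar) is a child frame
# whose points are mapped into the parent frame with a homogeneous transform.
# Frame names and the example transform values are hypothetical.
import numpy as np

def make_transform(yaw_rad, tx, ty, tz):
    """Build a 4x4 homogeneous transform: rotation about z, then translation."""
    c, s = np.cos(yaw_rad), np.sin(yaw_rad)
    T = np.eye(4)
    T[:3, :3] = [[c, -s, 0.0], [s, c, 0.0], [0.0, 0.0, 1.0]]
    T[:3, 3] = [tx, ty, tz]
    return T

# Hypothetical mounting of the Lidar relative to the IMU (parent) frame:
# rotated 90 degrees about z, offset 10 cm forward and 20 cm up.
T_imu_lidar = make_transform(np.pi / 2, 0.10, 0.0, 0.20)

# A point detected by the Lidar, expressed in the Lidar (child) frame.
p_lidar = np.array([1.0, 0.5, 0.0, 1.0])  # homogeneous coordinates

# The same point expressed in the IMU (parent) frame.
p_imu = T_imu_lidar @ p_lidar
print(p_imu[:3])
```

In a ROS-style implementation this role is usually played by a TF tree of frame transforms; the plain matrix algebra above is only meant to illustrate the parent/child relationship the abstract refers to.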
dc.subject: 感測融合 (zh_TW)
dc.subject: 定位 (zh_TW)
dc.subject: 導航 (zh_TW)
dc.subject: Sensing Fusion (en_US)
dc.subject: Positioning (en_US)
dc.subject: Navigation (en_US)
dc.title: LaVIS:融合雷射、視覺、慣性感測器的機器人定位和導航系統 (zh_TW)
dc.language.iso: zh-TW (zh-TW)
dc.title: LaVIS: Laser, Vision, and Inertial Sensing Fusion for Robot Positioning and Navigation (en_US)
dc.type: 博碩士論文 (zh_TW)
dc.type: thesis (en_US)
dc.publisher: National Central University (en_US)
