Thesis 111522071 Detailed Record




Name: Kai-Tung Chang (張凱東)    Department: Computer Science and Information Engineering
Thesis Title
Accelerated Point Cloud Rendering and Density Enhancement via Depth Completion: An Improved R3LIVE SLAM System Implementation
Related Theses
★ A Single-Object Tracking Method Based on Dynamic Fusion of Attention and Memory
Files    Full text viewable in the system after 2026-07-05.
Abstract (Chinese) Simultaneous Localization and Mapping (SLAM) systems are divided into traditional methods and machine-learning-based methods. Traditional SLAM systems employ geometric and probabilistic models to achieve high accuracy in static environments, but they face challenges of computational complexity and adaptability in dynamic environments. Machine-learning-based SLAM systems leverage deep learning and excel at handling unstructured data and dynamic scenes, but they require large amounts of training data and often lack interpretability.

Our goal is to strengthen traditional SLAM systems by combining them with deep learning models, making them more robust and comprehensive. This thesis optimizes and accelerates the point cloud rendering process within the R3LIVE framework, and uses a deep learning model to fill the mapping gaps and holes caused by the characteristics of the LiDAR.
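
The rendering step the abstract refers to projects colored map points into the current camera frame; because each point can be handled independently, the computation maps naturally onto one GPU thread per point. The following is a minimal NumPy sketch of that projection, assuming a pinhole camera with intrinsics K and a world-to-camera pose (R, t); the function name and interface are illustrative, not taken from R3LIVE's implementation.

```python
import numpy as np

def project_points(points_w, R, t, K, width, height):
    """Project world-frame map points into the image plane (pinhole model).

    Every point is processed independently of the others, which is what
    lets the rendering step run with one GPU thread per point.
    """
    pts_c = points_w @ R.T + t                 # world frame -> camera frame
    pts_c = pts_c[pts_c[:, 2] > 1e-6]          # keep points in front of the camera
    uv = pts_c @ K.T                           # homogeneous pixel coordinates
    uv = uv[:, :2] / uv[:, 2:3]                # perspective divide
    inside = ((uv[:, 0] >= 0) & (uv[:, 0] < width) &
              (uv[:, 1] >= 0) & (uv[:, 1] < height))
    return uv[inside]                          # (M, 2) pixel locations of visible points
```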
Abstract (English) Simultaneous Localization and Mapping (SLAM) systems are divided into traditional and machine learning-based methods. Traditional SLAM employs geometric and probabilistic models to achieve high precision in static environments but faces challenges with computational complexity and adaptability in dynamic environments. Machine learning-based SLAM, utilizing deep learning, excels in handling unstructured data and dynamic scenarios but requires substantial training data and often lacks interpretability.

We aim to enhance traditional SLAM systems by incorporating the advantages of deep learning models, making traditional SLAM systems more robust and comprehensive. In this paper, we optimize and accelerate the point cloud rendering process under the R3LIVE framework and use a deep learning model to solve the mapping gaps caused by the characteristics of LiDAR.
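
To densify the map, the sparse LiDAR depth image is first filled in by a depth-completion model, and the completed depth is then inverse-projected back into 3D points. Below is a minimal sketch of that inverse projection under a pinhole model with intrinsics K; the helper name is hypothetical, and the learned segment-based completion step itself is not shown.

```python
import numpy as np

def backproject_depth(depth, K):
    """Lift a dense (completed) depth map back into a 3D point cloud."""
    H, W = depth.shape
    fx, fy = K[0, 0], K[1, 1]
    cx, cy = K[0, 2], K[1, 2]
    v, u = np.mgrid[0:H, 0:W]                  # v = row index, u = column index
    x = (u - cx) * depth / fx                  # inverse pinhole projection
    y = (v - cy) * depth / fy
    pts = np.stack([x, y, depth], axis=-1).reshape(-1, 3)
    return pts[pts[:, 2] > 0]                  # drop pixels without valid depth
```

Each recovered point can then be transformed into the world frame with the current camera pose and merged into the global map.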
Keywords (Chinese) ★ Simultaneous Localization and Mapping (同時定位與地圖構建)    Keywords (English) ★ Simultaneous localization and mapping
★ SLAM
Table of Contents
1 Introduction . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 1
2 Research Background . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 3
2.1 Traditional and Machine Learning-based SLAM Methods . . . . 3
2.2 Traditional SLAM Method . . . . . . . . . . . . . . . . . . . . . . 4
2.3 Machine Learning-based SLAM Method . . . . . . . . . . . . . 5
2.4 Light Detection And Ranging . . . . . . . . . . . . . . . . . . . . 6
2.4.1 Mechanical LiDAR . . . . . . . . . . . . . . . . . . . . 7
2.4.2 Solid-State LiDAR . . . . . . . . . . . . . . . . . . . . . 8
2.4.3 Application Area Comparison . . . . . . . . . . . . . . 8
3 Related Works . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 9
3.1 R3LIVE . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 9
3.1.1 R3LIVE System Overview . . . . . . . . . . . . . . . . 11
3.2 Point Cloud Completion . . . . . . . . . . . . . . . . . . . . . . 14
3.3 Depth Completion and Depth Estimation . . . . . . . . . . . . . 17
3.4 Semantic Segmentation . . . . . . . . . . . . . . . . . . . . . . . 19
4 Method . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 22
4.1 System Overview . . . . . . . . . . . . . . . . . . . . . . . . . . . 22
4.2 GPU Acceleration . . . . . . . . . . . . . . . . . . . . . . . . . . . 23
4.3 Optimized Point Cloud Rendering Process . . . . . . . . . . . . 24
4.4 Depth Completion and Inverse Projection . . . . . . . . . . . . . 29
4.4.1 Segment-Based Depth Completion . . . . . . . . . . . 31
5 Experiments . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 33
5.1 Experiment Environment . . . . . . . . . . . . . . . . . . . . . . 33
5.2 Dataset . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 33
5.3 Experiment - GPU Acceleration . . . . . . . . . . . . . . . . . . . 35
5.4 Experiment - Depth Completion and Inverse Projection . . . . . 39
6 Conclusions . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 44
Bibliography 46
Advisors: 施國琛, 林智揚    Approval Date: 2024-08-19
