References
[1] Keisuke Fujii. “Extended Kalman filter”. In: Reference Manual 14 (2013), p. 41.
[2] Petar M Djuric et al. “Particle filtering”. In: IEEE Signal Processing Magazine 20.5 (2003), pp. 19–38.
[3] General Laser. Ouster OS0-128 LiDAR Sensor. https://www.generationrobots.com/en/404054-ouster-ultra-wide-fov-os0-lidar-rev-7.html. Accessed: 2024-06-29. 2024.
[4] CR Kennedy. Livox Avia LiDAR. https://survey.crkennedy.com.au/products/livoxavia/livox-avia-lidar. Accessed: 2024-06-29. 2024.
[5] Jesse Levinson et al. “Towards fully autonomous driving: Systems and algorithms”. In: 2011 IEEE Intelligent Vehicles Symposium (IV). IEEE. 2011, pp. 163–168.
[6] Adam Bry, Abraham Bachrach, and Nicholas Roy. “State estimation for aggressive flight in GPS-denied environments using onboard sensing”. In: 2012 IEEE International Conference on Robotics and Automation. IEEE. 2012, pp. 1–8.
[7] Fei Gao et al. “Flying on point clouds: Online trajectory generation and autonomous navigation for quadrotors in cluttered environments”. In: Journal of Field Robotics 36.4 (2019), pp. 710–733.
[8] Fanze Kong et al. “Avoiding dynamic small obstacles with onboard sensing and computation on aerial robots”. In: IEEE Robotics and Automation Letters 6.4 (2021), pp. 7869–7876.
[9] Zheng Liu, Fu Zhang, and Xiaoping Hong. “Low-cost retina-like robotic lidars based on incommensurable scanning”. In: IEEE/ASME Transactions on Mechatronics 27.1 (2021), pp. 58–68.
[10] Wei Xu and Fu Zhang. “Fast-lio: A fast, robust lidar-inertial odometry package by tightly-coupled iterated Kalman filter”. In: IEEE Robotics and Automation Letters 6.2 (2021), pp. 3317–3324.
[11] Jiarong Lin, Xiyuan Liu, and Fu Zhang. “A decentralized framework for simultaneous calibration, localization and mapping with multiple LiDARs”. In: 2020 IEEE/RSJ International Conference on Intelligent Robots and Systems (IROS). IEEE. 2020, pp. 4870–4877.
[12] Zheng Liu and Fu Zhang. “Balm: Bundle adjustment for lidar mapping”. In: IEEE Robotics and Automation Letters 6.2 (2021), pp. 3184–3191.
[13] Xiyuan Liu and Fu Zhang. “Extrinsic calibration of multiple lidars of small fov in targetless environments”. In: IEEE Robotics and Automation Letters 6.2 (2021), pp. 2036–2043.
[14] Chongjian Yuan et al. “Pixel-level extrinsic self calibration of high resolution lidar and camera in targetless environments”. In: IEEE Robotics and Automation Letters 6.4 (2021), pp. 7517–7524.
[15] Jiarong Lin and Fu Zhang. “Loam livox: A fast, robust, high-precision LiDAR odometry and mapping package for LiDARs of small FoV”. In: 2020 IEEE International Conference on Robotics and Automation (ICRA). IEEE. 2020, pp. 3126–3131.
[16] Jiarong Lin et al. “R2LIVE: A Robust, Real-Time, LiDAR-Inertial-Visual Tightly-Coupled State Estimator and Mapping”. In: IEEE Robotics and Automation Letters 6.4 (2021), pp. 7469–7476.
[17] Tixiao Shan et al. “Lvi-sam: Tightly-coupled lidar-visual-inertial odometry via smoothing and mapping”. In: 2021 IEEE International Conference on Robotics and Automation (ICRA). IEEE. 2021, pp. 5692–5698.
[18] Xingxing Zuo et al. “Lic-fusion: Lidar-inertial-camera odometry”. In: 2019 IEEE/RSJ International Conference on Intelligent Robots and Systems (IROS). IEEE. 2019, pp. 5848–5854.
[19] Xingxing Zuo et al. “Lic-fusion 2.0: Lidar-inertial-camera odometry with sliding-window plane-feature tracking”. In: 2020 IEEE/RSJ International Conference on Intelligent Robots and Systems (IROS). IEEE. 2020, pp. 5112–5119.
[20] Weikun Zhen and Sebastian Scherer. “Estimating the localizability in tunnel-like environments using LiDAR and UWB”. In: 2019 International Conference on Robotics and Automation (ICRA). IEEE. 2019, pp. 4903–4908.
[21] Haoyu Zhou, Zheng Yao, and Mingquan Lu. “UWB/LiDAR coordinate matching method with anti-degeneration capability”. In: IEEE Sensors Journal 21.3 (2020), pp. 3344–3352.
[22] César Debeunne and Damien Vivet. “A review of visual-LiDAR fusion based simultaneous localization and mapping”. In: Sensors 20.7 (2020), p. 2068.
[23] Ji Zhang and Sanjiv Singh. “Laser–visual–inertial odometry and mapping with high robustness and low drift”. In: Journal of Field Robotics 35.8 (2018), pp. 1242–1264.
[24] Weizhao Shao et al. “Stereo visual inertial lidar simultaneous localization and mapping”. In: 2019 IEEE/RSJ International Conference on Intelligent Robots and Systems (IROS). IEEE. 2019, pp. 370–377.
[25] Wei Wang et al. “DV-LOAM: Direct visual lidar odometry and mapping”. In: Remote Sensing 13.16 (2021), p. 3340.
[26] Jiarong Lin and Fu Zhang. “R3LIVE: A Robust, Real-time, RGB-colored, LiDAR-Inertial-Visual tightly-coupled state Estimation and mapping package”. In: 2022 International Conference on Robotics and Automation (ICRA). IEEE. 2022, pp. 10672–10678.
[27] Wei Xu et al. “Fast-lio2: Fast direct lidar-inertial odometry”. In: IEEE Transactions on Robotics 38.4 (2022), pp. 2053–2073.
[28] Yatian Pang et al. “Masked autoencoders for point cloud self-supervised learning”. In: European Conference on Computer Vision. Springer. 2022, pp. 604–621.
[29] Fangchang Ma, Guilherme Venturelli Cavalheiro, and Sertac Karaman. “Self-supervised sparse-to-dense: Self-supervised depth completion from lidar and monocular camera”. In: 2019 International Conference on Robotics and Automation (ICRA). IEEE. 2019, pp. 3288–3295.
[30] Yanchao Yang, Alex Wong, and Stefano Soatto. “Dense depth posterior (ddp) from single image and sparse range”. In: Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition. 2019, pp. 3353–3362.
[31] Maximilian Jaritz et al. “Sparse and dense data with cnns: Depth completion and semantic segmentation”. In: 2018 International Conference on 3D Vision (3DV). IEEE. 2018, pp. 52–60.
[32] Lihe Yang et al. “Depth anything: Unleashing the power of large-scale unlabeled data”. In: arXiv preprint arXiv:2401.10891 (2024).
[33] Alexander Kirillov et al. “Segment anything”. In: Proceedings of the IEEE/CVF International Conference on Computer Vision. 2023, pp. 4015–4026.
[34] Alexey Dosovitskiy et al. “An image is worth 16x16 words: Transformers for image recognition at scale”. In: arXiv preprint arXiv:2010.11929 (2020).
[35] Sahyeon Lee, Hyunjun Kim, and Sung-Han Sim. “Nontarget-based displacement measurement using LiDAR and camera”. In: Automation in Construction 142 (2022), p. 104493.
[36] Livox. Livox AVIA User Manual. 2023. URL: https://terra-1-g.djicdn.com/65c028cd298f4669a7f0e40e50ba1131/demo/avia/Livox%20AVIA%20User%20Manual_EN.pdf (visited on 05/23/2024).
[37] Ziv Lin. R3LIVE Dataset. https://github.com/ziv-lin/r3live_dataset. 2023.
[38] Yixi Cai, Wei Xu, and Fu Zhang. “ikd-tree: An incremental kd tree for robotic applications”. In: arXiv preprint arXiv:2102.10808 (2021).