References
[1] "ORB-SLAM2 paper interpretation and summary" (in Chinese). URL: https://blog.csdn.net/zxcqlf/article/details/80198298
[2] "Illustrated: development and application trends of 3D sensing technology," 大和有話說 (in Chinese). URL: https://reurl.cc/KA9m6M
[3] R. Mur-Artal and J. D. Tardós, "ORB-SLAM2: An open-source SLAM system for monocular, stereo, and RGB-D cameras," IEEE Transactions on Robotics, vol. 33, no. 5, pp. 1255–1262, 2017.
[4] R. Mur-Artal, J. D. Tardós, J. M. M. Montiel, and D. Gálvez-López, "ORB-SLAM2," https://github.com/raulmur/ORB_SLAM2, 2016.
[5] W.-C. Tu, "NTU Computer Vision: From Recognition to Geometry," Lecture 14.
[6] S. Mattoccia, "Stereo Vision: Algorithms and Applications," Department of Computer Science (DISI), University of Bologna.
[7] "Machine vision models: the distortion model" (in Chinese). URL: https://www.guyuehome.com/33246
[8] "Camera Calibration" (in Chinese). URL: https://reurl.cc/DgOR0Q
[9] M. Burri, J. Nikolic, P. Gohl, T. Schneider, J. Rehder, S. Omari, M. W. Achtelik, and R. Siegwart, "The EuRoC micro aerial vehicle datasets," The International Journal of Robotics Research, vol. 35, no. 10, pp. 1157–1163, 2016.
[10] "Improving your image matching results by 14% with one line of code." URL: https://reurl.cc/eEkejM
[11] X. Gao and T. Zhang, 14 Lectures on Visual SLAM: From Theory to Practice (in Chinese).
[12] A. Geiger, P. Lenz, C. Stiller, and R. Urtasun, "Vision meets robotics: The KITTI dataset," Int. J. Robot. Res., vol. 32, no. 11, pp. 1231–1237, 2013.
[13] DFRobot DC Motor Driver HAT (SKU: DFR0592) wiki: https://wiki.dfrobot.com/DC_Motor_Driver_HAT_SKU_DFR0592
[14] 小觅智能, "ORB-SLAM study notes" (in Chinese): https://zhuanlan.zhihu.com/p/47451004
[15] ROS Wiki: http://wiki.ros.org/ROS/Tutorials
[16] "BELID: Boosted Efficient Local Image Descriptor." https://www.researchgate.net/publication/334230975_BELID_Boosted_Efficient_Local_Image_Descriptor (accessed Aug. 21, 2021).
[17] J. Sturm, N. Engelhard, F. Endres, W. Burgard, and D. Cremers, "A benchmark for the evaluation of RGB-D SLAM systems," in 2012 IEEE/RSJ International Conference on Intelligent Robots and Systems, Oct. 2012, pp. 573–580. doi: 10.1109/IROS.2012.6385773.
[18] "A Raspberry Pi 2-based stereo camera depth meter," IEEE Xplore. https://ieeexplore.ieee.org/document/7986854 (accessed Jul. 05, 2021).
[19] Q. Fu et al., "A Robust RGB-D SLAM System With Points and Lines for Low Texture Indoor Environments," IEEE Sens. J., vol. 19, no. 21, pp. 9908–9920, Nov. 2019, doi: 10.1109/JSEN.2019.2927405.
[20] H. Xu and J. Zhang, "AANet: Adaptive Aggregation Network for Efficient Stereo Matching," 2020, pp. 1959–1968. Accessed: Jul. 05, 2021. [Online]. Available: https://openaccess.thecvf.com/content_CVPR_2020/html/Xu_AANet_Adaptive_Aggregation_Network_for_Efficient_Stereo_Matching_CVPR_2020_paper.html
[21] "Computer Vision Group - Visual SLAM - DSO: Direct Sparse Odometry." https://vision.in.tum.de/research/vslam/dso (accessed Aug. 18, 2021).
[22] "Computer Vision Group - Visual SLAM - LSD-SLAM: Large-Scale Direct Monocular SLAM." https://vision.in.tum.de/research/vslam/lsdslam (accessed Jul. 05, 2021).
[23] H. Fu, M. Gong, C. Wang, K. Batmanghelich, and D. Tao, "Deep Ordinal Regression Network for Monocular Depth Estimation," arXiv:1806.02446 [cs], Jun. 2018. Accessed: Jul. 05, 2021. [Online]. Available: http://arxiv.org/abs/1806.02446
[24] C. Kerl, J. Sturm, and D. Cremers, "Dense visual SLAM for RGB-D cameras," in 2013 IEEE/RSJ International Conference on Intelligent Robots and Systems, Nov. 2013, pp. 2100–2106. doi: 10.1109/IROS.2013.6696650.
[25] R. A. Güler, N. Neverova, and I. Kokkinos, "DensePose: Dense Human Pose Estimation In The Wild," arXiv:1802.00434 [cs], Feb. 2018. Accessed: Jul. 05, 2021. [Online]. Available: http://arxiv.org/abs/1802.00434
[26] A. Gordon, H. Li, R. Jonschkowski, and A. Angelova, "Depth from Videos in the Wild: Unsupervised Monocular Depth Learning from Unknown Cameras," arXiv:1904.04998 [cs], Apr. 2019. Accessed: Jul. 05, 2021. [Online]. Available: http://arxiv.org/abs/1904.04998
[27] C. Godard, O. Mac Aodha, M. Firman, and G. Brostow, "Digging Into Self-Supervised Monocular Depth Estimation," arXiv:1806.01260 [cs, stat], Aug. 2019. Accessed: Jul. 05, 2021. [Online]. Available: http://arxiv.org/abs/1806.01260
[28] R. A. Newcombe, S. J. Lovegrove, and A. J. Davison, "DTAM: Dense tracking and mapping in real-time," in 2011 International Conference on Computer Vision, Nov. 2011, pp. 2320–2327. doi: 10.1109/ICCV.2011.6126513.
[29] S. Zhang, Z. Wang, Q. Wang, J. Zhang, G. Wei, and X. Chu, "EDNet: Efficient Disparity Estimation with Cost Volume Combination and Attention-based Spatial Residual," arXiv:2010.13338 [cs], Mar. 2021. Accessed: Jul. 05, 2021. [Online]. Available: http://arxiv.org/abs/2010.13338
[30] "Elastic Fusion," Imperial College London. http://www.imperial.ac.uk/a-z-research/dyson-robotics-lab/downloads/elastic-fusion/ (accessed Aug. 18, 2021).
[31] A. Kendall et al., "End-to-End Learning of Geometry and Context for Deep Stereo Regression," arXiv:1703.04309 [cs], Mar. 2017. Accessed: Jul. 05, 2021. [Online]. Available: http://arxiv.org/abs/1703.04309
[32] Q. Wang, S. Shi, S. Zheng, K. Zhao, and X. Chu, "FADNet: A Fast and Accurate Network for Disparity Estimation," arXiv:2003.10758 [cs], Mar. 2020. Accessed: Jul. 05, 2021. [Online]. Available: http://arxiv.org/abs/2003.10758
[33] C. A. Aguilera, C. Aguilera, C. A. Navarro, and A. D. Sappa, "Fast CNN Stereo Depth Estimation through Embedded GPU Devices," Sensors, vol. 20, no. 11, Art. no. 11, Jan. 2020, doi: 10.3390/s20113249.
[34] D. Wofk, F. Ma, T.-J. Yang, S. Karaman, and V. Sze, "FastDepth: Fast Monocular Depth Estimation on Embedded Systems," arXiv:1903.03273 [cs], Mar. 2019. Accessed: Jul. 05, 2021. [Online]. Available: http://arxiv.org/abs/1903.03273
[35] P. Fischer et al., "FlowNet: Learning Optical Flow with Convolutional Networks," arXiv:1504.06852 [cs], May 2015. Accessed: Jul. 05, 2021. [Online]. Available: http://arxiv.org/abs/1504.06852
[36] F. Zhang, V. Prisacariu, R. Yang, and P. H. S. Torr, "GA-Net: Guided Aggregation Net for End-to-end Stereo Matching," arXiv:1904.06587 [cs], Apr. 2019. Accessed: Jul. 05, 2021. [Online]. Available: http://arxiv.org/abs/1904.06587
[37] Y. Cao, J. Xu, S. Lin, F. Wei, and H. Hu, "GCNet: Non-local Networks Meet Squeeze-Excitation Networks and Beyond," arXiv:1904.11492 [cs], Apr. 2019. Accessed: Aug. 18, 2021. [Online]. Available: http://arxiv.org/abs/1904.11492
[38] S. Kohlbrecher, J. Meyer, T. Graber, K. Petersen, U. Klingauf, and O. Von Stryk, "Hector Open Source Modules for Autonomous Mapping and Navigation with Rescue Robots," 2014, p. 631. doi: 10.1007/978-3-662-44468-9_58.
[39] J. Engel, T. Schöps, and D. Cremers, "LSD-SLAM: Large-Scale Direct Monocular SLAM," in Computer Vision – ECCV 2014, vol. 8690, D. Fleet, T. Pajdla, B. Schiele, and T. Tuytelaars, Eds. Cham: Springer International Publishing, 2014, pp. 834–849. doi: 10.1007/978-3-319-10605-2_54.
[40] "Mobile robot V-SLAM based on improved closed-loop detection algorithm," IEEE Xplore. https://ieeexplore.ieee.org/document/8785611 (accessed Jul. 11, 2021).
[41] A. J. Davison, I. D. Reid, N. D. Molton, and O. Stasse, "MonoSLAM: Real-Time Single Camera SLAM," IEEE Trans. Pattern Anal. Mach. Intell., vol. 29, no. 6, pp. 1052–1067, Jun. 2007, doi: 10.1109/TPAMI.2007.1049.
[42] "OpenSLAM.org." https://openslam-org.github.io/gmapping.html (accessed Aug. 18, 2021).
[43] R. Mur-Artal and J. D. Tardós, "ORB-SLAM2: An Open-Source SLAM System for Monocular, Stereo and RGB-D Cameras," IEEE Trans. Robot., vol. 33, no. 5, pp. 1255–1262, Oct. 2017, doi: 10.1109/TRO.2017.2705103.
[44] C. Campos, R. Elvira, J. J. G. Rodríguez, J. M. M. Montiel, and J. D. Tardós, "ORB-SLAM3: An Accurate Open-Source Library for Visual, Visual-Inertial and Multi-Map SLAM," IEEE Trans. Robot., pp. 1–17, 2021, doi: 10.1109/TRO.2021.3075644.
[45] "Parallel Tracking and Mapping for Small AR Workspaces (PTAM)." https://www.robots.ox.ac.uk/~gk/PTAM/ (accessed Aug. 18, 2021).
[46] L. Tiwari, P. Ji, Q.-H. Tran, B. Zhuang, S. Anand, and M. Chandraker, "Pseudo RGB-D for Self-Improving Monocular SLAM and Depth Prediction," arXiv:2004.10681 [cs], Aug. 2020. Accessed: Aug. 18, 2021. [Online]. Available: http://arxiv.org/abs/2004.10681
[47] J.-R. Chang and Y.-S. Chen, "Pyramid Stereo Matching Network," arXiv:1803.08669 [cs], Mar. 2018. Accessed: Jul. 05, 2021. [Online]. Available: http://arxiv.org/abs/1803.08669
[48] "RTAB-Map." http://introlab.github.io/rtabmap/ (accessed Aug. 18, 2021).
[49] A. Seki and M. Pollefeys, "SGM-Nets: Semi-Global Matching With Neural Networks," 2017, pp. 231–240. Accessed: Jul. 05, 2021. [Online]. Available: https://openaccess.thecvf.com/content_cvpr_2017/html/Seki_SGM-Nets_Semi-Global_Matching_CVPR_2017_paper.html
[50] Y. Luo et al., "Single View Stereo Matching," arXiv:1803.02612 [cs], Mar. 2018. Accessed: Jul. 05, 2021. [Online]. Available: http://arxiv.org/abs/1803.02612
[51] F. Ma and S. Karaman, "Sparse-to-Dense: Depth Prediction from Sparse Depth Samples and a Single Image," arXiv:1709.07492 [cs], Feb. 2018. Accessed: Jul. 05, 2021. [Online]. Available: http://arxiv.org/abs/1709.07492
[52] G. D. Caffaratti, M. G. Marchetta, and R. Q. Forradellas, "Stereo Matching through Squeeze Deep Neural Networks," Intel. Artif., vol. 22, no. 63, Art. no. 63, Feb. 2019, doi: 10.4114/intartif.vol22iss63pp16-38.
[53] S. Khamis, S. Fanello, C. Rhemann, A. Kowdle, J. Valentin, and S. Izadi, "StereoNet: Guided Hierarchical Refinement for Real-Time Edge-Aware Depth Prediction," arXiv:1807.08865 [cs], Jul. 2018. Accessed: Jul. 05, 2021. [Online]. Available: http://arxiv.org/abs/1807.08865
[54] C. Forster, M. Pizzoli, and D. Scaramuzza, "SVO: Fast semi-direct monocular visual odometry," in 2014 IEEE International Conference on Robotics and Automation (ICRA), Hong Kong, China, May 2014, pp. 15–22. doi: 10.1109/ICRA.2014.6906584.
[55] M. Burri et al., "The EuRoC micro aerial vehicle datasets," Int. J. Robot. Res., vol. 35, no. 10, pp. 1157–1163, Sep. 2016, doi: 10.1177/0278364915620033.
[56] M. Poggi, F. Aleotti, F. Tosi, and S. Mattoccia, "Towards real-time unsupervised monocular depth estimation on CPU," arXiv:1806.11430 [cs], Jul. 2018. Accessed: Jul. 05, 2021. [Online]. Available: http://arxiv.org/abs/1806.11430
[57] R. Ranftl, K. Lasinger, D. Hafner, K. Schindler, and V. Koltun, "Towards Robust Monocular Depth Estimation: Mixing Datasets for Zero-shot Cross-dataset Transfer," arXiv:1907.01341 [cs], Aug. 2020. Accessed: Jul. 05, 2021. [Online]. Available: http://arxiv.org/abs/1907.01341
[58] A. Geiger, P. Lenz, C. Stiller, and R. Urtasun, "Vision meets robotics: The KITTI dataset," 2013. https://journals.sagepub.com/doi/full/10.1177/0278364913491297 (accessed Jul. 05, 2021).