Thesis 109323032: Detailed Record




Name: Kuan-Hui Li (李冠輝)    Department: Department of Mechanical Engineering
Thesis Title: A Positioning Method for Dynamic Reference Frame with YOLOv4 Network and LiDAR Camera
Related Theses
★ Development of a Two-Photon Photopolymerization Microfabrication System
★ Laser Machining Path Generation for a Five-Axis Two-Photon Photopolymerization Microfabrication System
★ Development of a Pedicle Screw Positioning Algorithm and an Automated Design Workflow for Guiding Fixtures
★ Exposure Time Optimization Based on Uniform-Energy Ellipsoids for Two-Photon Polymerization Microfabrication
★ A Chord-Error-Based Slicing Algorithm for Two-Photon Photopolymerization Microfabrication
★ Enhancing Microstructure Strength with Helical Laser Scanning Paths in Two-Photon Photopolymerization Microfabrication
★ Improving the Fabrication Quality of 3D Structures in Two-Photon Polymerization Microfabrication
★ Generation and Quality Improvement of 3D Triangular Mesh Models Built from Multiple 2D Images
★ Development of an Automated Fabrication Workflow for a Freeze-Form Manufacturing System for Tissue Engineering
★ Automatic Camera Calibration and 2D Image Contour Extraction
★ Fabricating High-Aspect-Ratio Structures with a Four-Axis Microfabrication System Based on Two-Photon Photopolymerization
★ Machine Design for Freeze-Form Additive Manufacturing and Parameter Tuning for Tissue Engineering Scaffold Fabrication
★ Spatial Coordinate System Integration of Multi-View Camera Groups Based on 3D Model Reconstruction from 2D Image Contours
★ Development of a Multiple-2D-Calibration-Board Camera Calibration Workflow for 3D Model Reconstruction of Large Objects
★ Generation of Solid-Water Support Structures for Freeze-Form Additive Manufacturing in Tissue Engineering
★ Development of an Additive Manufacturing System for Polyether Ether Ketone (PEEK)
Files: Full text viewable in the repository after 2028-02-01.
Abstract (Chinese) In recent years, surgical navigation systems have been widely adopted in a broad range of clinical procedures because they reliably help physicians complete surgical tasks more safely and accurately. However, their high price on the medical market has kept them from spreading into surgical training.
Meanwhile, with the continuing advances in 3D optical measurement technologies such as stereo vision, structured light, and time of flight, lower-cost instruments such as consumer-grade depth cameras and LiDAR (Light Detection and Ranging) cameras can now acquire 3D information about a target efficiently; LiDAR, for example, computes the distance between the camera and the target from the time difference between emitting and receiving a light pulse. Because of its measurement accuracy and its ability to capture coordinate information over a large area at once, LiDAR is frequently applied in robotics and autonomous driving.
Since LiDAR cameras measure more accurately than the stereo-vision depth cameras that currently dominate the market, this study uses a LiDAR camera to develop a surgical positioning method that tracks a dynamic reference frame with a YOLOv4 network. YOLOv4 detects objects in the color image, the detections are combined with the corresponding depth information to fit the marker coordinates, and the position and orientation of the dynamic reference frame are then determined from their geometric relations. The error experiment measured coordinates at 63 positions, 200 measurements per position, relative to the LiDAR camera. The results show that the fitting error of the marker sphere centers is within 3 mm, demonstrating that the system achieves reliable accuracy and stability.
Abstract (English) Surgical navigation systems have been widely used in clinical medicine in recent years because they are highly reliable in helping doctors accomplish surgery more precisely and safely. However, surgical navigation systems on the market are usually expensive, and their high price makes them hard to use in surgical training.
With the improvement of 3D (three-dimensional) measurement technologies such as stereo vision, structured light, and time of flight (ToF), cheaper devices such as depth cameras and LiDAR (light detection and ranging) cameras can also acquire depth information effectively. For example, LiDAR obtains the distance between the target and the camera from the time difference between the emission and reception of the laser pulse. Because of its high precision and rapid acquisition of large-area information, LiDAR is widely used in robotic and autonomous vehicle systems.
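This ranging relation can be written as a single equation: with c the speed of light and Δt the measured round-trip time between emission and reception, the camera-to-target distance is

```latex
d = \frac{c \,\Delta t}{2}
```

where the factor of 2 accounts for the pulse traveling to the target and back.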
Because the measurement accuracy of a LiDAR camera is better than that of the stereo-vision depth cameras commonly used today, this research develops a positioning method for a DRF (dynamic reference frame) based on the YOLOv4 (You Only Look Once, version 4) network and a LiDAR camera. The algorithm detects the markers in the RGB image and projects the detection results onto the corresponding depth map, from which the marker positions are fitted. After fitting, the centers of all markers are determined and compared with the known DRF structure to calculate the position and orientation of the DRF. The experiment measured coordinates at 63 positions, with 200 measurements per position. The results show that all errors are below 3 mm, which proves that the proposed method has reliable accuracy and stability.
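The two geometric steps summarized above (fitting each marker sphere's center from its LiDAR surface points, then aligning the fitted centers to the known DRF marker layout to recover position and orientation) can be sketched as follows. This is a minimal illustration, assuming an algebraic least-squares sphere fit and a Kabsch (SVD) rigid alignment; the function names and the example marker layout are hypothetical, not taken from the thesis.

```python
import numpy as np

def fit_sphere_center(points):
    """Fit a sphere to Nx3 surface points; return (center, radius).

    Linearizes x^2 + y^2 + z^2 = 2ax + 2by + 2cz + (r^2 - a^2 - b^2 - c^2)
    and solves it as an ordinary least-squares problem.
    """
    A = np.hstack([2.0 * points, np.ones((len(points), 1))])
    b = (points ** 2).sum(axis=1)
    sol, *_ = np.linalg.lstsq(A, b, rcond=None)
    center, k = sol[:3], sol[3]
    return center, np.sqrt(k + center @ center)

def rigid_align(model, measured):
    """Kabsch alignment: find R, t such that R @ model_i + t ~= measured_i."""
    mc, qc = model.mean(axis=0), measured.mean(axis=0)
    H = (model - mc).T @ (measured - qc)    # 3x3 cross-covariance
    U, _, Vt = np.linalg.svd(H)
    d = np.sign(np.linalg.det(Vt.T @ U.T))  # guard against reflections
    R = Vt.T @ np.diag([1.0, 1.0, d]) @ U.T
    return R, qc - R @ mc

# Hypothetical 4-marker DRF layout (mm) in its own coordinate frame.
layout = np.array([[0, 0, 0], [50, 0, 0], [0, 80, 0], [50, 80, 20]], float)
# `centers` would come from fit_sphere_center() applied to each detected
# marker's depth points; here we simulate them with a known pose.
Rz = np.array([[0, -1, 0], [1, 0, 0], [0, 0, 1]], float)  # 90 deg about z
centers = layout @ Rz.T + np.array([100.0, 200.0, 300.0])
R, t = rigid_align(layout, centers)
assert np.allclose(R @ layout.T + t[:, None], centers.T)  # pose recovered
```

The linear sphere fit avoids iterative optimization and is well suited to the repeated per-frame fitting the abstract describes; a robust variant (e.g., RANSAC over the depth points) would be a natural extension for noisy LiDAR returns.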
Keywords (Chinese) ★ LiDAR camera
★ YOLOv4
★ surgical navigation system
★ point cloud
★ object detection
Keywords (English) ★ LiDAR camera
★ YOLOv4
★ surgical navigation
★ point cloud
★ object tracking
Table of Contents
Abstract (Chinese)
Abstract (English)
Acknowledgments
Table of Contents
List of Figures
List of Tables
Chapter 1 Introduction
1-1 Preface
1-2 Literature Review
1-3 Research Motivation and Objectives
1-4 Thesis Organization
Chapter 2 Background and Theory
2-1 Overview of Surgical Navigation Systems
2-2 Ranging Principles of Mainstream Depth Cameras
2-3 Object Detection
2-4 Marker Sphere Center Coordinate Fitting
Chapter 3 Research Methods
3-1 System Architecture
3-2 Equipment
3-3 Positioning Workflow
3-4 Positioning Algorithm
3-5 Training the YOLOv4 Model
3-6 Planar Displacement Error Experiment
Chapter 4 Experimental Results and Discussion
4-1 YOLOv4 Training Results and Testing
4-2 DRF Recognition Results of the Algorithm
4-3 Marker Displacement Error Experiment Results
4-4 Error Analysis of the DRF Positioning System
Chapter 5 Conclusions and Future Work
5-1 Conclusions
5-2 Future Work
Chapter 6 References
References [1] Chi, C., Du, Y., Ye, J., Kou, D., Qiu, J., Wang, J., & Chen, X. (2014). Intraoperative imaging-guided cancer surgery: from current fluorescence molecular imaging methods to future multi-modality imaging technology. Theranostics, 4(11), 1072.
[2] Zheng, G., Kowal, J., Ballester, M. A. G., Caversaccio, M., & Nolte, L. P. (2007). Registration techniques for computer navigation. Current Orthopaedics, 21(3), 170-179.
[3] Chiou, S. Y., Zhang, Z. Y., Liu, H. L., Yan, J. L., Wei, K. C., & Chen, P. Y. (2022). Augmented Reality Surgical Navigation System for External Ventricular Drain. Healthcare, 10(10), 1815.
[4] Zhang, F., Lei, T., Li, J., Cai, X., Shao, X., Chang, J., & Tian, F. (2018). Real-time calibration and registration method for indoor scene with joint depth and color camera. International Journal of Pattern Recognition and Artificial Intelligence, 32(7), 1854021.
[5] Fu, L., Majeed, Y., Zhang, X., Karkee, M., & Zhang, Q. (2020). Faster R–CNN–based apple detection in dense-foliage fruiting-wall trees using RGB and depth features for robotic harvesting. Biosystems Engineering, 197, 245-256.
[6] Li, Y., He, L., Jia, J., Lv, J., Chen, J., Qiao, X., & Wu, C. (2021). In-field tea shoot detection and 3D localization using an RGB-D camera. Computers and Electronics in Agriculture, 185, 106149.
[7] Zhou, Z., Wu, B., Duan, J., Zhang, X., Zhang, N., & Liang, Z. (2017). Optical surgical instrument tracking system based on the principle of stereo vision. Journal of Biomedical Optics, 22(6), 065005.
[8] Bochkovskiy, A., Wang, C. Y., & Liao, H. Y. M. (2020). YOLOv4: Optimal speed and accuracy of object detection. arXiv preprint arXiv:2004.10934.
[9] Hernandez, D., Garimella, R., Eltorai, A. E., & Daniels, A. H. (2017). Computer‐assisted Orthopaedic Surgery. Orthopaedic Surgery, 9(2), 152-158.
[10] Fuentes-Pacheco, J., Ruiz-Ascencio, J., & Rendón-Mancha, J. M. (2015). Visual simultaneous localization and mapping: a survey. Artificial Intelligence Review, 43(1), 55-81.
[11] Geiger, A., Lenz, P., Stiller, C., & Urtasun, R. (2013). Vision meets robotics: The kitti dataset. The International Journal of Robotics Research, 32(11), 1231-1237.
[12] Gomez-Ojeda, R., Moreno, F. A., Zuniga-Noël, D., Scaramuzza, D., & Gonzalez-Jimenez, J. (2019). PL-SLAM: A stereo SLAM system through the combination of points and line segments. IEEE Transactions on Robotics, 35(3), 734-746.
[13] Martínez-Corral, M., & Javidi, B. (2018). Fundamentals of 3D imaging and displays: a tutorial on integral imaging, light-field, and plenoptic systems. Advances in Optics and Photonics, 10(3), 512-566.
[14] Eitel, J. U., Höfle, B., Vierling, L. A., Abellán, A., Asner, G. P., Deems, J. S., & Vierling, K. T. (2016). Beyond 3-D: The new spectrum of lidar applications for earth and ecological sciences. Remote Sensing of Environment, 186, 372-392.
[15] Duan, X., Gao, L., Wang, Y., Li, J., Li, H., & Guo, Y. (2018). Modelling and experiment based on a navigation system for a cranio-maxillofacial surgical robot. Journal of Healthcare Engineering, 2018, 4670852.
[16] Northern Digital Inc. official website. Retrieved from https://www.ndigital.com/optical-measurement-technology/polaris-tools-and-accessories/
[17] Jiang, Q., Shao, F., Gao, W., Chen, Z., Jiang, G., & Ho, Y. S. (2018). Unified no-reference quality assessment of singly and multiply distorted stereoscopic images. IEEE Transactions on Image Processing, 28(4), 1866-1881.
[18] EDN Taiwan, "3D vision gives robots 'eyes'" (in Chinese). Retrieved from https://www.edntaiwan.com/20190610nt31-3d-vision-gives-robots-guidance/
[19] Carranza-García, M., Torres-Mateo, J., Lara-Benítez, P., & García-Gutiérrez, J. (2020). On the performance of one-stage and two-stage object detectors in autonomous vehicles using camera data. Remote Sensing, 13(1), 89.
[20] Nick Bourdakos, Custom-Object-Detection. Retrieved from https://github.com/bourdakos1/Custom-Object-Detection
[21] Redmon, J., Divvala, S., Girshick, R., & Farhadi, A. (2016). You only look once: Unified, real-time object detection. Proceedings of the IEEE conference on computer vision and pattern recognition, 779-788.
[22] Wang, C. Y., Liao, H. Y. M., Wu, Y. H., Chen, P. Y., Hsieh, J. W., & Yeh, I. H. (2020). CSPNet: A new backbone that can enhance learning capability of CNN. Proceedings of the IEEE/CVF conference on computer vision and pattern recognition workshops, 390-391.
[23] He, K., Zhang, X., Ren, S., & Sun, J. (2015). Spatial pyramid pooling in deep convolutional networks for visual recognition. IEEE Transactions on Pattern Analysis and Machine Intelligence, 37(9), 1904-1916.
[24] Roy, A. M., Bose, R., & Bhaduri, J. (2022). A fast accurate fine-grain object detection model based on YOLOv4 deep neural network. Neural Computing and Applications, 34(5), 3895-3921.
[25] Liu, S., Qi, L., Qin, H., Shi, J., & Jia, J. (2018). Path aggregation network for instance segmentation. Proceedings of the IEEE conference on computer vision and pattern recognition, 8759-8768.
[26] 周威, "YOLO V4: Analysis of the network structure and loss function" (in Chinese). Retrieved from https://zhuanlan.zhihu.com/p/150127712
[27] Björck, Å. (1990). Least squares methods. Handbook of numerical analysis, 1, 465-652.
[28] Intel® RealSense™ official website. Retrieved from https://www.intelrealsense.com/lidar-camera-l515/
[29] Northern Digital Inc., Polaris Spectra Tool Kit Guide, Revision 1, August 2006.
[30] Intel RealSense Team, Intel® RealSense™ LiDAR Camera L515 Datasheet, Revision 003, January 2021.
[31] Bi, S., Gu, Y., Zou, J., Wang, L., Zhai, C., & Gong, M. (2021). High precision optical tracking system based on near infrared trinocular stereo vision. Sensors, 21(7), 2528.
[32] Özgüner, O., Shkurti, T., Huang, S., Hao, R., Jackson, R. C., Newman, W. S., & Çavuşoğlu, M. C. (2020). Camera-robot calibration for the da vinci robotic surgery system. IEEE Transactions on Automation Science and Engineering, 17(4), 2154-2161.
[33] Su, Y., Gao, W., Liu, Z., Sun, S., & Fu, Y. (2020). Hybrid marker-based object tracking using Kinect v2. IEEE Transactions on Instrumentation and Measurement, 69(9), 6436-6445.
[34] Zhang, T., Wang, J., Song, S., & Meng, M. Q. H. (2022). Wearable Surgical Optical Tracking System Based on Multi-Modular Sensor Fusion. IEEE Transactions on Instrumentation and Measurement, 71, 1-11.
[35] Ministry of Health and Welfare (Taiwan), "MOHW issues duty-hour guidelines for resident physicians" (in Chinese). Retrieved from https://www.mohw.gov.tw/cp-2736-8859-1.html
[36] Gumprecht, H. K., Widenka, D. C., & Lumenta, C. B. (1999). Brain Lab VectorVision neuronavigation system: technology and clinical experiences in 131 cases. Neurosurgery, 44(1), 97-104.
[37] Ewurum, C. H., Guo, Y., Pagnha, S., Feng, Z., & Luo, X. (2018). Surgical navigation in orthopedics: workflow and system review. Intelligent Orthopaedics, 47-63.
[38] 余政叡, "Development of a navigation system with integrated augmented reality for endoscopic brain surgery" (in Chinese), Master's thesis, National Central University, 2021.
Advisor: Chao-Yaug Liao (廖昭仰)    Review Date: 2023-01-12
