NCU Institutional Repository: Item 987654321/93612


    Please use this identifier to cite or link to this item: http://ir.lib.ncu.edu.tw/handle/987654321/93612


    Title: A Positioning Method for Dynamic Reference Frame with YOLOv4 Network and LiDAR Camera
    Authors: 李冠輝;Li, Kuan-Hui
    Contributors: Department of Mechanical Engineering
    Keywords: LiDAR camera; YOLOv4; surgical navigation system; point cloud; object detection
    Date: 2023-01-12
    Issue Date: 2024-09-19 17:21:35 (UTC+8)
    Publisher: National Central University
    Abstract: Surgical navigation systems have been widely adopted in clinical medicine in recent years because they reliably help doctors complete surgical tasks more safely and precisely. However, these systems remain expensive on the medical market, which has kept them from wider use in surgical training.
    Owing to continual advances in 3D (three-dimensional) optical measurement technologies such as stereo vision, structured light, and time of flight (ToF), lower-cost consumer-grade devices such as depth cameras and LiDAR (Light Detection And Ranging) cameras can now acquire a target's 3D information efficiently. For example, LiDAR computes the distance between the camera and a target from the time difference between emitting and receiving a laser pulse. Because of its accurate measurement precision and its ability to capture coordinate data over a large area at once, LiDAR is widely used in robotics and autonomous-vehicle applications.
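    The time-of-flight relation described above can be sketched numerically: the laser pulse covers the camera-to-target distance twice, so the one-way distance is half the round-trip time multiplied by the speed of light (a minimal illustration, not code from the thesis):

    ```python
    # Time-of-flight ranging: distance = c * dt / 2
    # (the pulse travels to the target and back, covering the distance twice).
    C = 299_792_458.0  # speed of light in vacuum, m/s

    def tof_distance(round_trip_seconds: float) -> float:
        """Return the camera-to-target distance in meters."""
        return C * round_trip_seconds / 2.0

    # A round trip of about 6.67 ns corresponds to roughly 1 m.
    print(round(tof_distance(6.67e-9), 3))
    ```

    The nanosecond timescale here is why ToF sensors need very precise timing electronics, and why their range precision is largely set by timing resolution rather than by optics.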
    Because the LiDAR camera offers better measurement accuracy than the stereo-vision depth cameras common in current systems, this research develops a positioning method for a DRF (dynamic reference frame) based on the YOLOv4 (You Only Look Once, version 4) network and a LiDAR camera. YOLOv4 detects the markers in the RGB image, the detections are projected onto the corresponding depth map, and the marker positions are fitted from the resulting coordinates. Once all marker-sphere centers are determined, they are compared with the known DRF geometry to calculate the position and orientation of the DRF. The accuracy experiment measured coordinates, relative to the LiDAR camera, at 63 positions with 200 measurements per position. The results show that the fitting errors of the marker-sphere centers are all below 3 mm, demonstrating that the proposed system achieves reliable accuracy and stability.
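    The final step described above, recovering the DRF pose by comparing the fitted marker centers against the known frame geometry, is a classic rigid-registration problem. A minimal sketch using the Kabsch/SVD method follows; the solver choice and the 4-marker geometry are assumptions for illustration, as the abstract does not specify them:

    ```python
    import numpy as np

    def fit_rigid_transform(model_pts: np.ndarray, measured_pts: np.ndarray):
        """Find rotation R and translation t such that R @ model + t ~ measured.

        model_pts, measured_pts: (N, 3) arrays of corresponding marker centers.
        Uses the Kabsch/SVD method on mean-centered point sets.
        """
        mu_m = model_pts.mean(axis=0)
        mu_s = measured_pts.mean(axis=0)
        H = (model_pts - mu_m).T @ (measured_pts - mu_s)   # 3x3 cross-covariance
        U, _, Vt = np.linalg.svd(H)
        d = np.sign(np.linalg.det(Vt.T @ U.T))             # guard against reflection
        R = Vt.T @ np.diag([1.0, 1.0, d]) @ U.T
        t = mu_s - R @ mu_m
        return R, t

    # Hypothetical 4-marker DRF geometry (meters, non-coplanar, in the frame's own coordinates).
    model = np.array([[0, 0, 0], [0.05, 0, 0], [0, 0.08, 0], [0.05, 0.08, 0.03]])

    # Simulate "measured" centers: rotate 30 degrees about z, then translate.
    theta = np.deg2rad(30)
    R_true = np.array([[np.cos(theta), -np.sin(theta), 0],
                       [np.sin(theta),  np.cos(theta), 0],
                       [0, 0, 1]])
    measured = model @ R_true.T + np.array([0.1, -0.2, 0.5])

    R, t = fit_rigid_transform(model, measured)
    residual = np.abs(R @ model.T + t[:, None] - measured.T).max()
    ```

    With noiseless input the recovered pose matches the simulated one to floating-point precision; with real marker measurements the residual would reflect the fitting error of the sphere centers, which the thesis reports as below 3 mm.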
    Appears in Collections:[Graduate Institute of Mechanical Engineering] Electronic Thesis & Dissertation

    Files in This Item:

    File: index.html (0 KB, HTML)


    All items in NCUIR are protected by copyright, with all rights reserved.

