Graduate Thesis 111521046: Detailed Record




Author: Hsuan-Wei Wang (王璿瑋)    Department: Electrical Engineering
Thesis Title: Application of YOLOv8 and Deep Learning for Removing Visual Occlusion and Modeling Target Object in Remote Ultrasound Scanning System
(Chinese title: 應用YOLOv8和深度學習消除操作視野遮蔽與待測物建模於遠端超音波掃描系統)
Related Theses:
★ Using comb filters in a phase-coded steady-state visual evoked potential (SSVEP) brain-computer interface
★ Applying electroluminescent devices to SSVEP brain-computer interface detection
★ Development of a real-time physiological display device for smartphones
★ A flash visual evoked potential brain-computer interface driven by multi-frequency phase coding
★ Analyzing an SSVEP brain-computer interface with empirical mode decomposition
★ Extracting auditory evoked magnetoencephalographic signals with empirical mode decomposition
★ Applying light-dark flicker visual evoked potentials to a remote control
★ Real-time control of an SSVEP brain-controlled car using ensemble empirical mode decomposition
★ Fuzzy-theory-based detection for an SSVEP brain-computer interface
★ Forward-model-based spatial filter design for noise removal in a visual evoked potential brain-computer interface
★ An intelligent remote ECG monitoring system
★ Hidden-Markov-model detection for an SSVEP brain-computer interface and its application to a brain-controlled car
★ Predicting human joint angles from limb EMG signals with neural networks
★ Finger-vein image segmentation using level-set methods and image inhomogeneity correction
★ Applying wavelet coding to multi-channel physiological signal transmission
★ Combining Gaussian mixture models and expectation-maximization for target detection in a phase-coded visual brain-computer interface
Files: full text available in the system after 2026-06-30.
Abstract (Chinese): Research on remote robotic ultrasound systems (RUS) is flourishing in the current era, with control and visual processing drawing particular attention. This study applies existing deep learning techniques to remove occluders from the operating field of view. First, the camera's real-time video stream is passed through our trained YOLOv8 instance segmentation model to detect whether an obstacle has appeared, and the obstacle region is masked out. The resulting missing region is then restored with an existing video inpainting model (Decoupled Spatial-Temporal Transformer for Video Inpainting, DSTT), which computes attention values from past frames to fill in the missing region of the current frame. For the initial image sequence, we insert obstacle-free initial frames to aid subsequent inpainting, thereby achieving the goal of occluder removal. To enable automatic scanning and reduce manual operation, we use an Intel depth camera to build a 3D model of the surface contour of the object under examination. The modeling result is sent to a reinforcement learning virtual environment for simulation and is also returned to the operator-side graphical user interface (GUI), where the operator selects the desired scan endpoint. After selection, a scan is simulated in the reinforcement learning environment; once the path and its safety are confirmed, the simulated joint-angle data are sent to the robotic arm, which executes the corresponding scanning motion. Experiments verify the range of object shapes (planes, inclined surfaces, spheres) to which the 3D modeling applies: for planes and inclined surfaces the depth error is below 0.041 cm and the area error below 3.3%, while for spheres the depth error is below 0.17 cm at tangent angles under 25.6°. In the live operator video, an occluder is detected once 11% of it has entered the frame and is removed from the operator's real-time view; subsequent results show that inserting the initial frames into the image sequence improves both PSNR and SSIM. By applying existing deep learning techniques to a remote RUS, this study successfully resolves the clinical problems of occluders blocking the view and of scan automation, improving the system's efficiency and safety.
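The occlusion-removal pipeline summarized above pairs an instance-segmentation detector with a video-inpainting model. As a minimal sketch only, assuming the `ultralytics` package and a hypothetical weights file `occluder-seg.pt` standing in for the thesis's own trained model, the masking stage could look like this:

```python
# Sketch of the masking stage: run a YOLOv8-seg model on one frame and
# rasterize every detected occluder instance into a binary mask.
import cv2
import numpy as np
from ultralytics import YOLO

model = YOLO("occluder-seg.pt")  # hypothetical custom-trained weights

def occluder_mask(frame: np.ndarray) -> np.ndarray:
    """Return a uint8 mask (255 = pixel covered by a detected occluder)."""
    result = model(frame, verbose=False)[0]
    mask = np.zeros(frame.shape[:2], dtype=np.uint8)
    if result.masks is not None:
        for polygon in result.masks.xy:  # one (N, 2) polygon per instance
            cv2.fillPoly(mask, [polygon.astype(np.int32)], 255)
    return mask
```

The frame and mask pairs, with obstacle-free initial frames inserted at the head of the sequence as the abstract describes, would then be handed to the DSTT inpainting model; DSTT's own interface is not reproduced here.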
Abstract (English): In the current era, research on Remote Robotic Ultrasound Systems (RUS) is flourishing, particularly in areas concerning control and visual processing. This study employs existing deep learning techniques to eliminate occlusions in the operating field of view. First, the real-time streaming images from the camera are processed by our trained YOLOv8 instance segmentation model to detect whether any obstacles appear, and the detected obstacle regions are masked out. Next, the missing regions caused by the removal of these obstacles are restored using an existing video inpainting model, the Decoupled Spatial-Temporal Transformer (DSTT). The DSTT model computes attention values over previous frames to fill in the missing regions of the current frame. For the initial image sequence, we insert frames without any obstacle to facilitate subsequent inpainting, thereby achieving the goal of removing occlusions. To realize automated scanning and reduce manual operation, we use an Intel depth camera to perform 3D surface modeling of the object to be inspected. The modeling results are transmitted to a reinforcement learning environment for simulation and are also sent back to the operator's Graphical User Interface (GUI). Through the GUI, the operator selects the desired endpoint for scanning. The scanning process is then simulated within the reinforcement learning environment to validate the path and ensure safety. Once validated, the joint-angle data from the simulation are sent to the robotic arm to perform the corresponding scanning action. Experimental results confirm that the 3D modeling approach is applicable to objects of different shapes (planar, inclined, spherical). For planar and inclined surfaces, the depth error is less than 0.041 cm and the area error is under 3.3%. For spheres, at tangent angles below 25.6°, the depth error is less than 0.17 cm. In the real-time operational images, an occluder is detected and removed from the operator's live view once 11% of it has entered the frame. Further experiments demonstrate that inserting an obstacle-free initial image into the sequence improves both PSNR and SSIM. This study applies existing deep learning technologies to a remote RUS, successfully addressing clinical challenges related to occlusion and automated scanning, thereby enhancing both the efficiency and safety of the system.
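For the 3D surface-modeling stage, the following is a minimal sketch, not the author's code, assuming the `pyrealsense2` SDK and a connected RealSense D455: it deprojects a single depth frame into a point cloud of the kind a surface model would be built from.

```python
# Sketch: capture one depth frame from a RealSense D455 and deproject it
# into an (N, 3) point cloud as raw material for surface modeling.
import numpy as np
import pyrealsense2 as rs

pipeline = rs.pipeline()
config = rs.config()
config.enable_stream(rs.stream.depth, 640, 480, rs.format.z16, 30)
pipeline.start(config)
try:
    depth_frame = pipeline.wait_for_frames().get_depth_frame()
    points = rs.pointcloud().calculate(depth_frame)  # deprojection uses the camera intrinsics
    xyz = np.asanyarray(points.get_vertices()).view(np.float32).reshape(-1, 3)
    xyz = xyz[xyz[:, 2] > 0]  # discard pixels with no valid depth reading
    print(f"{len(xyz)} valid surface points (metres, camera frame)")
finally:
    pipeline.stop()
```

A surface-reconstruction step would then turn such point clouds into the contour model displayed in the operator GUI and fed to the reinforcement learning simulation.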
Keywords (Chinese):
★ Robotic ultrasound system
★ Occlusion removal
★ Instance segmentation
★ 3D modeling
★ Image inpainting
Keywords (English):
Table of Contents:
Chinese Abstract i
Abstract ii
Table of Contents iv
List of Figures vi
List of Tables viii
Chapter 1 Introduction 1
1-1 Thesis Organization 1
1-2 Motivation, Objectives, and Literature Review 2
Chapter 2 Principles 6
2-1 Camera Imaging Principles 6
2-2 Introduction to YOLOv8 9
2-3 Introduction to DSTT 11
2-4 Inpainting Evaluation Metrics 14
2-4-1 PSNR 14
2-4-2 SSIM 15
Chapter 3 Research Design and Methods 17
3-1 System Architecture and Equipment 17
3-1-1 Robotic Arm 18
3-1-2 Siemens Acuson P300 Ultrasound System 19
3-1-3 Intel RealSense D455 Depth Camera 20
3-2 Client Side 22
3-3 3D Modeling System 23
3-4 Operation Monitoring System 33
Chapter 4 Experimental Results and Discussion 36
4-1 Modeling Results 36
4-2 Object Removal 42
Chapter 5 Conclusions and Future Work 46
References 49
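Sections 2-4-1 and 2-4-2 cover the two metrics by which the abstract reports the inpainting gains. As a self-contained illustration on placeholder frames, assuming the scikit-image implementations, both can be computed as follows:

```python
# Sketch: compute PSNR and SSIM between a ground-truth frame and an
# inpainted frame using scikit-image; the frames here are synthetic.
import numpy as np
from skimage.metrics import peak_signal_noise_ratio, structural_similarity

rng = np.random.default_rng(0)
reference = rng.random((480, 640, 3))  # stand-in for a ground-truth frame
inpainted = np.clip(reference + 0.01 * rng.standard_normal(reference.shape), 0.0, 1.0)

psnr = peak_signal_noise_ratio(reference, inpainted, data_range=1.0)
ssim = structural_similarity(reference, inpainted, channel_axis=-1, data_range=1.0)
print(f"PSNR = {psnr:.2f} dB, SSIM = {ssim:.4f}")  # higher is better for both
```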
Advisor: Po-Lei Lee (李柏磊)    Date of Approval: 2025-01-20
