References
[1] Chi, C., Du, Y., Ye, J., Kou, D., Qiu, J., Wang, J., & Chen, X. (2014). Intraoperative imaging-guided cancer surgery: from current fluorescence molecular imaging methods to future multi-modality imaging technology. Theranostics, 4(11), 1072.
[2] Zheng, G., Kowal, J., Ballester, M. A. G., Caversaccio, M., & Nolte, L. P. (2007). Registration techniques for computer navigation. Current Orthopaedics, 21(3), 170-179.
[3] Chiou, S. Y., Zhang, Z. Y., Liu, H. L., Yan, J. L., Wei, K. C., & Chen, P. Y. (2022). Augmented Reality Surgical Navigation System for External Ventricular Drain. Healthcare, 10(10), 1815.
[4] Zhang, F., Lei, T., Li, J., Cai, X., Shao, X., Chang, J., & Tian, F. (2018). Real-time calibration and registration method for indoor scene with joint depth and color camera. International Journal of Pattern Recognition and Artificial Intelligence, 32(7), 1854021.
[5] Fu, L., Majeed, Y., Zhang, X., Karkee, M., & Zhang, Q. (2020). Faster R–CNN–based apple detection in dense-foliage fruiting-wall trees using RGB and depth features for robotic harvesting. Biosystems Engineering, 197, 245-256.
[6] Li, Y., He, L., Jia, J., Lv, J., Chen, J., Qiao, X., & Wu, C. (2021). In-field tea shoot detection and 3D localization using an RGB-D camera. Computers and Electronics in Agriculture, 185, 106149.
[7] Zhou, Z., Wu, B., Duan, J., Zhang, X., Zhang, N., & Liang, Z. (2017). Optical surgical instrument tracking system based on the principle of stereo vision. Journal of Biomedical Optics, 22(6), 065005.
[8] Bochkovskiy, A., Wang, C. Y., & Liao, H. Y. M. (2020). YOLOv4: Optimal speed and accuracy of object detection. arXiv preprint arXiv:2004.10934.
[9] Hernandez, D., Garimella, R., Eltorai, A. E., & Daniels, A. H. (2017). Computer‐assisted Orthopaedic Surgery. Orthopaedic Surgery, 9(2), 152-158.
[10] Fuentes-Pacheco, J., Ruiz-Ascencio, J., & Rendón-Mancha, J. M. (2015). Visual simultaneous localization and mapping: a survey. Artificial Intelligence Review, 43(1), 55-81.
[11] Geiger, A., Lenz, P., Stiller, C., & Urtasun, R. (2013). Vision meets robotics: The KITTI dataset. The International Journal of Robotics Research, 32(11), 1231-1237.
[12] Gomez-Ojeda, R., Moreno, F. A., Zuniga-Noël, D., Scaramuzza, D., & Gonzalez-Jimenez, J. (2019). PL-SLAM: A stereo SLAM system through the combination of points and line segments. IEEE Transactions on Robotics, 35(3), 734-746.
[13] Martínez-Corral, M., & Javidi, B. (2018). Fundamentals of 3D imaging and displays: a tutorial on integral imaging, light-field, and plenoptic systems. Advances in Optics and Photonics, 10(3), 512-566.
[14] Eitel, J. U., Höfle, B., Vierling, L. A., Abellán, A., Asner, G. P., Deems, J. S., & Vierling, K. T. (2016). Beyond 3-D: The new spectrum of lidar applications for earth and ecological sciences. Remote Sensing of Environment, 186, 372-392.
[15] Duan, X., Gao, L., Wang, Y., Li, J., Li, H., & Guo, Y. (2018). Modelling and experiment based on a navigation system for a cranio-maxillofacial surgical robot. Journal of Healthcare Engineering, 2018, 4670852.
[16] Northern Digital Incorporated official website, retrieved from https://www.ndigital.com/optical-measurement-technology/polaris-tools-and-accessories/
[17] Jiang, Q., Shao, F., Gao, W., Chen, Z., Jiang, G., & Ho, Y. S. (2018). Unified no-reference quality assessment of singly and multiply distorted stereoscopic images. IEEE Transactions on Image Processing, 28(4), 1866-1881.
[18] EDN Taiwan, 3D vision gives robots "eyes", retrieved from https://www.edntaiwan.com/20190610nt31-3d-vision-gives-robots-guidance/
[19] Carranza-García, M., Torres-Mateo, J., Lara-Benítez, P., & García-Gutiérrez, J. (2020). On the performance of one-stage and two-stage object detectors in autonomous vehicles using camera data. Remote Sensing, 13(1), 89.
[20] Nick Bourdakos, Custom-Object-Detection, retrieved from https://github.com/bourdakos1/Custom-Object-Detection
[21] Redmon, J., Divvala, S., Girshick, R., & Farhadi, A. (2016). You only look once: Unified, real-time object detection. Proceedings of the IEEE conference on computer vision and pattern recognition, 779-788.
[22] Wang, C. Y., Liao, H. Y. M., Wu, Y. H., Chen, P. Y., Hsieh, J. W., & Yeh, I. H. (2020). CSPNet: A new backbone that can enhance learning capability of CNN. Proceedings of the IEEE/CVF conference on computer vision and pattern recognition workshops, 390-391.
[23] He, K., Zhang, X., Ren, S., & Sun, J. (2015). Spatial pyramid pooling in deep convolutional networks for visual recognition. IEEE Transactions on Pattern Analysis and Machine Intelligence, 37(9), 1904-1916.
[24] Roy, A. M., Bose, R., & Bhaduri, J. (2022). A fast accurate fine-grain object detection model based on YOLOv4 deep neural network. Neural Computing and Applications, 34(5), 3895-3921.
[25] Liu, S., Qi, L., Qin, H., Shi, J., & Jia, J. (2018). Path aggregation network for instance segmentation. Proceedings of the IEEE conference on computer vision and pattern recognition, 8759-8768.
[26] Zhou Wei, YOLO V4 — network architecture and loss function analysis, retrieved from https://zhuanlan.zhihu.com/p/150127712
[27] Björck, Å. (1990). Least squares methods. Handbook of numerical analysis, 1, 465-652.
[28] Intel® RealSense™ official website, retrieved from https://www.intelrealsense.com/lidar-camera-l515/
[29] Northern Digital Inc., Polaris Spectra Tool Kit Guide, Revision 1, August 2006.
[30] Intel RealSense Team, Intel® RealSense™ LiDAR Camera L515 Datasheet, Revision 003, January 2021.
[31] Bi, S., Gu, Y., Zou, J., Wang, L., Zhai, C., & Gong, M. (2021). High precision optical tracking system based on near infrared trinocular stereo vision. Sensors, 21(7), 2528.
[32] Özgüner, O., Shkurti, T., Huang, S., Hao, R., Jackson, R. C., Newman, W. S., & Çavuşoğlu, M. C. (2020). Camera-robot calibration for the da Vinci robotic surgery system. IEEE Transactions on Automation Science and Engineering, 17(4), 2154-2161.
[33] Su, Y., Gao, W., Liu, Z., Sun, S., & Fu, Y. (2020). Hybrid marker-based object tracking using Kinect v2. IEEE Transactions on Instrumentation and Measurement, 69(9), 6436-6445.
[34] Zhang, T., Wang, J., Song, S., & Meng, M. Q. H. (2022). Wearable Surgical Optical Tracking System Based on Multi-Modular Sensor Fusion. IEEE Transactions on Instrumentation and Measurement, 71, 1-11.
[35] Ministry of Health and Welfare (Taiwan), MOHW issues working-hour guidelines for resident physicians, retrieved from https://www.mohw.gov.tw/cp-2736-8859-1.html
[36] Gumprecht, H. K., Widenka, D. C., & Lumenta, C. B. (1999). Brain Lab VectorVision neuronavigation system: technology and clinical experiences in 131 cases. Neurosurgery, 44(1), 97-104.
[37] Ewurum, C. H., Guo, Y., Pagnha, S., Feng, Z., & Luo, X. (2018). Surgical navigation in orthopedics: workflow and system review. Intelligent Orthopaedics, 47-63.
[38] 余政叡, "Development of a navigation system integrated with augmented reality for endoscopic brain surgery," Master's thesis, National Central University, 2021.