References
1. M. N. Popa and E. Balan, “Particularities of the Painting Process in the Automotive Industry,” Conference Proceedings of the Academy of Romanian Scientists, Vol. 12, pp. 115-128, 2020.
2. N. K. Akafuah, S. Poozesh, A. Salaimeh, G. Patrick, K. Lawler, and K. Saito, “Evolution of the Automotive Body Coating Process—A Review,” Coatings, Vol. 6, 24, 2016.
3. B. Zhang, J. Wu, L. Wang, and Z. Yu, “Accurate Dynamic Modeling and Control Parameters Design of an Industrial Hybrid Spray-Painting Robot,” Robotics and Computer-Integrated Manufacturing, Vol. 63, 101923, 2020.
4. Provided by Kuozui Motors, Ltd. (國瑞汽車股份有限公司).
5. M. Franzo, A. Pica, S. Pascucci, F. Marinozzi, and F. Bini, “Hybrid System Mixed Reality and Marker-Less Motion Tracking for Sports Rehabilitation of Martial Arts Athletes,” Applied Sciences, Vol. 13, 2587, 2023.
6. J. Yu, K. Weng, G. Liang, and G. Xie, “A Vision-Based Robotic Grasping System Using Deep Learning for 3D Object Recognition and Pose Estimation,” 2013 IEEE International Conference on Robotics and Biomimetics (ROBIO), pp. 1175-1180, Shenzhen, China, 2013.
7. M. Seker, A. Mannisto, A. Iosifidis, and J. Raitoharju, “Automatic Social Distance Estimation from Images: Performance Evaluation, Test Benchmark, and Algorithm,” Machine Learning with Applications, Vol. 10, 100427, 2021.
8. W. Yan, Z. Xu, X. Zhou, Q. Su, S. Li, and H. Wu, “Fast Object Pose Estimation Using Adaptive Threshold for Bin-Picking,” IEEE Access, Vol. 8, pp. 63055-63064, 2020.
9. J. Wang, Z. Gao, Y. Zhang, J. Zhou, J. Wu, and P. Li, “Real-Time Detection and Location of Potted Flowers Based on a ZED Camera and a YOLO V4-Tiny Deep Learning Algorithm,” Horticulturae, Vol. 8, 21, 2022.
10. X. Gao, L. Qin, C. Ma, and Y. Sun, “Research on Real-Time Cloth Edge Extraction Method Based on ENet Semantic Segmentation,” Journal of Engineered Fibers and Fabrics, Vol. 17, pp. 1-11, 2022.
11. Z. Wu, S. Song, A. Khosla, F. Yu, L. Zhang, X. Tang, and J. Xiao, “3D ShapeNets: A Deep Representation for Volumetric Shapes,” 2015 IEEE Conference on Computer Vision and Pattern Recognition (CVPR), pp. 1912-1920, Boston, MA, USA, 2015.
12. A. Mousavian, D. Anguelov, J. Flynn, and J. Kosecka, “3D Bounding Box Estimation Using Deep Learning and Geometry,” 2017 IEEE Conference on Computer Vision and Pattern Recognition (CVPR), pp. 7074-7082, Honolulu, HI, USA, 2017.
13. C. R. Qi, H. Su, K. Mo, and L. J. Guibas, “PointNet: Deep Learning on Point Sets for 3D Classification and Segmentation,” 2017 IEEE Conference on Computer Vision and Pattern Recognition (CVPR), pp. 77-85, Honolulu, HI, USA, 2017.
14. C. R. Qi, L. Yi, H. Su, and L. J. Guibas, “PointNet++: Deep Hierarchical Feature Learning on Point Sets in a Metric Space,” Proceedings of the 31st International Conference on Neural Information Processing Systems, pp. 5105-5114, Long Beach, CA, USA, 2017.
15. Y. Xiang, T. Schmidt, V. Narayanan, and D. Fox, “PoseCNN: A Convolutional Neural Network for 6D Object Pose Estimation in Cluttered Scenes,” arXiv preprint arXiv:1711.00199, 2017.
16. M. Rad and V. Lepetit, “BB8: A Scalable, Accurate, Robust to Partial Occlusion Method for Predicting the 3D Poses of Challenging Objects Without Using Depth,” 2017 IEEE International Conference on Computer Vision (ICCV), pp. 3848-3856, Venice, Italy, 2017.
17. W. Kehl, F. Manhardt, F. Tombari, S. Ilic, and N. Navab, “SSD-6D: Making RGB-Based 3D Detection and 6D Pose Estimation Great Again,” 2017 IEEE International Conference on Computer Vision (ICCV), pp. 1530-1538, Venice, Italy, 2017.
18. Y. Li, G. Wang, X. Ji, Y. Xiang, and D. Fox, “DeepIM: Deep Iterative Matching for 6D Pose Estimation,” International Journal of Computer Vision, Vol. 128, pp. 657-678, 2020.
19. C. Wang, D. Xu, Y. Zhu, R. Martin-Martin, C. Lu, L. Fei-Fei, and S. Savarese, “DenseFusion: 6D Object Pose Estimation by Iterative Dense Fusion,” 2019 IEEE/CVF Conference on Computer Vision and Pattern Recognition (CVPR), pp. 3338-3347, Long Beach, CA, USA, 2019.
20. M. Fu and W. Zhou, “DeepHMap++: Combined Projection Grouping and Correspondence Learning for Full DoF Pose Estimation,” Sensors, Vol. 19, 1032, 2019.
21. Z. Zou, K. Chen, Z. Shi, Y. Guo, and J. Ye, “Object Detection in 20 Years: A Survey,” Proceedings of the IEEE, Vol. 111, pp. 257-276, 2023.
22. R. Girshick, J. Donahue, T. Darrell, and J. Malik, “Region-Based Convolutional Networks for Accurate Object Detection and Segmentation,” IEEE Transactions on Pattern Analysis and Machine Intelligence, Vol. 38, pp. 142-158, 2016.
23. S. Minaee, Y. Boykov, F. Porikli, A. Plaza, N. Kehtarnavaz, and D. Terzopoulos, “Image Segmentation Using Deep Learning: A Survey,” IEEE Transactions on Pattern Analysis and Machine Intelligence, Vol. 44, pp. 3523-3542, 2022.
24. A. Kirillov, K. He, R. Girshick, C. Rother, and P. Dollar, “Panoptic Segmentation,” 2019 IEEE/CVF Conference on Computer Vision and Pattern Recognition (CVPR), pp. 9396-9405, Long Beach, CA, USA, 2019.
25. Y. Li, S. Wang, Q. Tian, and X. Ding, “A Survey of Recent Advances in Visual Feature Detection,” Neurocomputing, Vol. 149, pp. 736-751, 2015.
26. W. Yin, H. Wen, Z. Ning, J. Ye, Z. Dong, and L. Luo, “Fruit Detection and Pose Estimation for Grape Cluster-Harvesting Robot Using Binocular Imagery Based on Deep Neural Networks,” Frontiers in Robotics and AI, Vol. 8, 626989, 2021.
27. Medium, [Object Detection] S9: An Introduction to Mask R-CNN, https://ivan-eng-murmur.medium.com/物件偵測-S9-Mask-R-CNN-簡介-99370c98de28, accessed on November 25, 2024.
28. K. He, X. Zhang, S. Ren, and J. Sun, “Deep Residual Learning for Image Recognition,” Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition (CVPR), pp. 770-778, Las Vegas, NV, USA, 2016.
29. S. Xie, R. Girshick, P. Dollar, Z. Tu, and K. He, “Aggregated Residual Transformations for Deep Neural Networks,” Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition (CVPR), pp. 1492-1500, Honolulu, HI, USA, 2017.
30. T. Y. Lin, P. Dollar, R. Girshick, K. He, B. Hariharan, and S. Belongie, “Feature Pyramid Networks for Object Detection,” Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition (CVPR), pp. 2117-2125, Honolulu, HI, USA, 2017.
31. iT 邦幫忙, [Day 8] The Object Detection Metric IoU (Intersection over Union), https://ithelp.ithome.com.tw/m/articles/10350081, accessed on November 25, 2024.
32. K. He, G. Gkioxari, P. Dollar, and R. Girshick, “Mask R-CNN,” Proceedings of the IEEE International Conference on Computer Vision (ICCV), pp. 2961-2969, Venice, Italy, 2017.
33. J. Long, E. Shelhamer, and T. Darrell, “Fully Convolutional Networks for Semantic Segmentation,” Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition (CVPR), pp. 3431-3440, Boston, MA, USA, 2015.
34. CSDN, Mask R-CNN Source Code Analysis 5: Analysis of the Loss Components, https://blog.csdn.net/sxlsxl119/article/details/103433078, accessed on November 25, 2024.
35. S. Ren, K. He, R. Girshick, and J. Sun, “Faster R-CNN: Towards Real-Time Object Detection with Region Proposal Networks,” IEEE Transactions on Pattern Analysis and Machine Intelligence, Vol. 39, pp. 1137-1149, 2017.
36. R. Girshick, “Fast R-CNN,” IEEE International Conference on Computer Vision (ICCV), pp. 1440-1448, Santiago, Chile, 2015.
37. Medium, [Object Detection] S2: An Introduction to Fast R-CNN, https://ivan-eng-murmur.medium.com/obeject-detection-s2-fast-rcnn-簡介-40cfe7b5f605, accessed on November 25, 2024.
38. Stack Overflow, What is the Loss Function of the Mask RCNN?, https://stackoverflow.com/questions/46272841/what-is-the-loss-function-of-the-mask-rcnn, accessed on November 25, 2024.
39. R. Bai, M. Wang, Z. Zhang, J. Lu, and F. Shen, “Automated Construction Site Monitoring Based on Improved YOLOv8-seg Instance Segmentation Algorithm,” IEEE Access, Vol. 11, pp. 139082-139096, 2023.
40. CSDN, YOLOv8 is Here | Detailed Interpretation of the Improved Modules in YOLOv8! YOLOv5 Officially Releases YOLOv8!, https://blog.csdn.net/qq_40716944/article/details/128609569, accessed on December 6, 2024.
41. Medium, [Object Detection] YOLOv8 Explained in Detail, https://henry870603.medium.com/object-detection-yolov8詳解-fdf8874e5e99, accessed on December 6, 2024.
42. CSDN, An In-Depth Explanation of the YOLOv8 Network Architecture: Backbone, Neck, and Head, and the Conv, Bottleneck, C2f, SPPF, and Detect Modules, https://blog.csdn.net/shangyanaf/article/details/139223155, accessed on December 6, 2024.
43. CSDN, YOLOv8 Optimization: The Distribution Shifting Convolution (DSConv) Variant for Improving the Memory Efficiency and Speed of Convolutional Layers, https://blog.csdn.net/m0_63774211/article/details/130408988, accessed on December 6, 2024.
44. M. G. D. Nascimento, R. Fawcett, and V. A. Prisacariu, “DSConv: Efficient Convolution Operator,” Proceedings of the IEEE/CVF International Conference on Computer Vision (ICCV), pp. 5148-5157, Seoul, Korea, 2019.
45. S. Liu, L. Qi, H. Qin, J. Shi, and J. Jia, “Path Aggregation Network for Instance Segmentation,” Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, pp. 8759-8768, Salt Lake City, UT, USA, 2018.
46. CSDN, The C2f Module in YOLOv8 Compared with the C3 Module in YOLOv5, https://blog.csdn.net/python_plus/article/details/129223831, accessed on December 6, 2024.
47. M. Yaseen, “What is YOLOv8: An In-Depth Exploration of the Internal Features of the Next-Generation Object Detector,” arXiv preprint arXiv:2408.15857, 2024.
48. J. Terven, D.-M. Cordova-Esparza, and J.-A. Romero-Gonzalez, “A Comprehensive Review of YOLO Architectures in Computer Vision: From YOLOv1 to YOLOv8 and YOLO-NAS,” Machine Learning and Knowledge Extraction, Vol. 5, pp. 1680-1716, 2023.
49. GitHub, Loss Function Explanation, https://github.com/ultralytics/ultralytics/issues/10465, accessed on December 6, 2024.
50. H. Zhang, Y. Wang, F. Dayoub, and N. Sunderhauf, “VarifocalNet: An IoU-aware Dense Object Detector,” Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition, Virtual, pp. 8514-8523, 2021.
51. X. Li, W. Wang, L. Wu, S. Chen, X. Hu, J. Li, and J. Yang, “Generalized Focal Loss: Learning Qualified and Distributed Bounding Boxes for Dense Object Detection,” Advances in Neural Information Processing Systems, Vol. 33, pp. 21002-21012, 2020.
52. Z. Zheng, P. Wang, W. Liu, J. Li, R. Ye, and D. Ren, “Distance-IoU Loss: Faster and Better Learning for Bounding Box Regression,” Proceedings of the AAAI Conference on Artificial Intelligence, pp. 12993-13000, New York, NY, USA, 2020.
53. F. Milletari, N. Navab, and S. A. Ahmadi, “V-Net: Fully Convolutional Neural Networks for Volumetric Medical Image Segmentation,” 2016 Fourth International Conference on 3D Vision (3DV), Stanford, CA, USA, pp. 565-571, 2016.
54. Roboflow, Roboflow, https://roboflow.com, accessed on December 11, 2024.
55. CSDN, OpenCV Notes: The Denoising Function cv2.fastNlMeansDenoising, https://blog.csdn.net/qq_38410428/article/details/93046099, accessed on December 11, 2024.
56. CSDN, Python Denoising Functions: Image Denoising with OpenCV in Python, https://blog.csdn.net/weixin_40001309/article/details/110326001, accessed on December 11, 2024.
57. CSDN, Learning OpenCV: An Explanation of the cv2.findContours() Function (Python), https://blog.csdn.net/weixin_44690935/article/details/109008946, accessed on December 11, 2024.
58. CSDN, Python + OpenCV: Finding and Drawing Contours with cv2.findContours() and cv2.drawContours(), https://blog.csdn.net/weixin_42216109/article/details/89840323, accessed on December 11, 2024.
59. StereoLabs, Depth Settings, https://www.stereolabs.com/docs/depth-sensing/depth-settings, accessed on December 12, 2024.
60. OpenCV, Camera Calibration and 3D Reconstruction, https://docs.opencv.org/4.5.5/d9/d0c/group__calib3d.html, accessed on January 14, 2025.
61. StereoLabs, How can I convert 3D world coordinates to 2D image coordinates and viceversa?, https://support.stereolabs.com/hc/en-us/articles/4554115218711-How-can-I-convert-3D-world-coordinates-to-2D-image-coordinates-and-viceversa, accessed on January 14, 2025.
62. Meshlogic, Fitting a Circle to Cluster of 3D Points, https://meshlogic.github.io/posts/jupyter/curve-fitting/fitting-a-circle-to-cluster-of-3d-points/, accessed on January 14, 2025.