Electronic Thesis/Dissertation 111327019 — Detailed Record




Name: Zhen-Rong Chen (陳振榮)    Graduating Department: Graduate Institute of Opto-Mechatronics Engineering
Thesis Title: 整合深度學習與機器視覺之微小電子連結器智慧整料系統開發
(Development of an Intelligent Material Handling System Integrating Deep Learning and Machine Vision for Micro Connectors)
Related Theses
★ Integration of an Optical-Interrupter Wafer Positioning System with Semiconductor Process Equipment
★ Analysis and Comparison of Equal-Optical-Path and Pyramid-Type Secondary Optical Elements for Solar Concentrators
★ Weight Reduction and Flexure-Support Structure Optimization of a 550 mm Aperture Mirror
★ Opto-Mechanical Integrated Analysis for the Deformation of a 620 mm Mirror and the Elastic-Adhesive Mounting Design of an 8-inch Mirror
★ Study on the Contact Characteristics of Helical Gears with Linear Tip Relief
★ Gear Accuracy Measurement Using Projection Moiré Techniques
★ Mirror Weight Reduction and Flexure-Support Structure Optimization
★ Weight Reduction of Curved Mirrors and Optimization of Back-Support Flexure Mechanisms
★ Human-Machine Interface and Machine Learning for the Sensing System of a Drag-Soldering Machine
★ Optimized Structural Design of Plastic Lenses Considering Imaging Quality
★ Weight Reduction and Flexure-Support Structure Optimization of an Off-Axis Rectangular Mirror
★ Optimization of PCB Drag-Soldering Process Parameters and Prediction of the Remaining Useful Life of Soldering-Iron Tips
★ Contact Analysis of ZK-Type Double-Enveloping Worm Gear Sets
★ Development of a Six-Axis Robot Arm Grasping System Integrating Deep Learning and Stereo Vision
★ Development of a Grasping System for Flat-Lying Bulk Plastic Connector Bodies Integrating Light-Source Control and Deep Learning Recognition
★ Development of a Six-Axis Robot Arm System Integrating Vision and Force Control
  1. The electronic full text of this thesis is approved for immediate open access.
  2. The open-access electronic full text is authorized only for personal, non-profit retrieval, reading, and printing for academic research purposes.
  3. Please comply with the Copyright Act of the Republic of China (Taiwan); do not reproduce, distribute, adapt, repost, or broadcast it without authorization.

Abstract (Chinese) With advances in AI technology, intelligent robotics has become a clear trend: robots can be endowed with human-like perception so that they can carry out more complex tasks and objectives. In electronics factories, automated production lines for electronic components are already quite mature, but for high-mix, low-volume assembly tasks of electronic connectors, building a dedicated automated line with vibratory bowl feeders is comparatively costly. This study therefore develops a bulk-part picking system based on a robot arm, machine vision, and deep learning that can cope with the complex features and environment of micro electronic connectors. Embedded in the quasi-production-line framework developed by our department, the system realizes a flexible production line and addresses both the high cost of dedicated automation and feeders and the shortage of labor.
This thesis uses the YOLOv8 object detection model, trained with data augmentation under light sources at ±25% of the ambient brightness to improve robustness. The detector automatically generates many candidate bounding boxes, and Non-Maximum Suppression (NMS) filters out the redundant ones. To raise the assembly success rate, an adapter fixture was designed to arrange the pose of the micro connectors; the gripping direction follows the opening orientation, computed from the bounding-box center and the image centroid. Because the insertion step is direction-sensitive, each part is placed into the adapter fixture in one of four pose classes: opening up or down, front or back face. The system then waits for a signal from the rotary table and transfers the connector from the adapter fixture to the terminal-insertion fixture. To improve assembly accuracy, a calibration algorithm and a calibration fixture were also developed and mounted on the insertion fixture to determine the actual positions of the assembly slots. On completion, an I/O signal notifies the rotary table to switch to the next set of fixtures, achieving an automated flexible production line.
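As an illustration of the ±25% brightness augmentation described above, the following is a minimal Python sketch, not the thesis's actual pipeline: the function name and the multiplicative scaling scheme are assumptions, and the real setup (e.g., Roboflow or Ultralytics augmentation settings) may implement the adjustment differently.

```python
import random
import numpy as np

def random_brightness(image, max_delta=0.25):
    """Scale brightness by a random factor in [1 - max_delta, 1 + max_delta].

    Hypothetical helper mirroring the +/-25% brightness range quoted in the
    abstract; the thesis pipeline may implement this differently.
    """
    factor = 1.0 + random.uniform(-max_delta, max_delta)
    brightened = image.astype(np.float32) * factor
    return np.clip(brightened, 0.0, 255.0).astype(np.uint8)

# Usage: make a few brightness-shifted copies of a (synthetic) training image.
img = np.full((480, 640, 3), 128, dtype=np.uint8)  # stand-in for a real photo
augmented = [random_brightness(img) for _ in range(4)]
```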
Abstract (English) With the advancement of AI technology, intelligent robotic systems have become a major trend. These systems are capable of human-like perception, allowing them to perform more complex tasks and objectives. In electronic factories, the automation of production lines for electronic components has become quite mature. However, for the assembly of a wide variety of electronic connectors in small quantities, the cost of building automated production lines and vibration feeders is quite high. Therefore, this study aims to develop a grasping system using robotic arms, machine vision, and deep learning to adapt to the complex features and environment of micro electronic connectors. This system is integrated into the production line framework developed in this study, successfully achieving a flexible production line and addressing the high costs of traditional automated lines and vibration feeders, as well as labor shortages.
This paper uses the YOLOv8 object detection model and applies data augmentation to train the model under varying lighting conditions, with brightness adjusted by ±25% of the ambient level, thereby enhancing robustness. The object detection model automatically generates multiple candidate bounding boxes, which are filtered by Non-Maximum Suppression (NMS) to remove redundant boxes. To improve the assembly success rate, this study designs an adapter fixture to arrange the pose of the micro connectors. The gripper orientation is determined by computing the direction of the opening from the center of the bounding box and the image centroid. Since connector insertion requires a specific orientation, each part is placed in the adapter fixture in one of four pose classes: opening facing up or down, with the front or back face showing. The system then waits for a signal from the rotary table to transfer the electronic connector from the adapter fixture to the insertion fixture. To enhance assembly accuracy, a calibration algorithm and a calibration fixture are developed and integrated into the insertion fixture to determine the actual positions of the assembly slots. Once completed, an I/O signal is sent to the rotary table to mark the end of the task, prompting the switch to the next fixture and thus achieving an automated, flexible production line.
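To make the opening-direction computation concrete, below is a minimal Python/OpenCV sketch. It is not the thesis's actual code: the function name, the Otsu binarization step, and the sign convention (the vector from the part's centroid to the bounding-box center points toward the opening) are all assumptions for illustration.

```python
import cv2
import numpy as np

def opening_direction(gray, box):
    """Estimate a connector's opening direction inside a detected bounding box.

    Sketch of the idea in the abstract: the open side of the housing contains
    less material, so the part's mass centroid shifts away from the opening;
    the vector from the centroid to the box center then points toward it.
    The sign convention and Otsu thresholding are assumptions.
    """
    x1, y1, x2, y2 = box
    roi = gray[y1:y2, x1:x2]
    box_center = np.array([(x2 - x1) / 2.0, (y2 - y1) / 2.0])  # ROI coordinates

    # Binarize so image moments are computed over part pixels only.
    _, mask = cv2.threshold(roi, 0, 255, cv2.THRESH_BINARY + cv2.THRESH_OTSU)
    m = cv2.moments(mask, binaryImage=True)
    if m["m00"] == 0:
        return None  # no foreground pixels in the box

    centroid = np.array([m["m10"] / m["m00"], m["m01"] / m["m00"]])
    v = box_center - centroid
    return float(np.degrees(np.arctan2(v[1], v[0])))  # angle in image frame
```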
Keywords (Chinese) ★ Micro electronic component recognition
★ YOLOv8
★ Machine vision
★ Six-axis robot arm
★ ROS (Robot Operating System)
★ Object detection grasping system
Keywords (English) ★ Recognition of micro connectors
★ YOLOv8
★ Machine vision
★ Six-axis robot arm
★ ROS (Robot Operating System)
★ Object detection grasping system
Thesis Table of Contents
Abstract (Chinese) I
Abstract (English) II
Acknowledgments IV
Table of Contents V
List of Figures VII
List of Tables X
Chapter 1 Introduction 1
1.1 Research Background 1
1.2 Literature Review 1
1.2.1 Object Detection 1
1.2.2 Object Grasping 3
1.3 Research Motivation and Objectives 6
Chapter 2 System Architecture 8
2.1 Hardware Specifications 9
2.2 Software Overview 11
2.2.1 Robot Operating System (ROS) 11
2.2.2 YOLOv8 13
Chapter 3 Research Methods 20
3.1 Bulk Plastic-Part Detection and Pose Placement 20
3.1.1 Dataset Construction and Annotation 21
3.1.2 Training Parameter Settings 23
3.1.3 Model Training 24
3.1.4 Post-Processing of Detection Results 26
3.2 Image Processing Methods 28
3.2.1 Grayscale Conversion 28
3.2.2 Median Filtering 28
3.2.3 Binarization 29
3.2.4 Erosion 30
3.2.5 Dilation 30
3.2.6 Edge Detection 30
3.2.7 Minimum Bounding Rectangle 32
3.3 Hand-Eye Calibration 32
3.4 System Optimization and Integration 39
3.4.1 Dataset Pre-Processing 41
3.4.2 Process Design and Fixture Optimization 41
3.4.3 Assembly Calibration Algorithm 46
3.4.4 ROS Integration 48
Chapter 4 Experimental Results 51
4.1 YOLOv8 Model Recognition Experiments 51
4.1.1 Comparison of Training Metrics 51
4.1.2 Model Robustness Evaluation 55
4.2 Bulk-Part Assembly System Experiments 60
4.2.1 Plastic-Part Arrangement Speed and Success-Rate Evaluation 60
4.2.2 Plastic-Part Assembly Speed and Success-Rate Evaluation 63
4.2.3 Overall Assembly Speed and Success-Rate Evaluation 66
Chapter 5 Conclusions and Future Work 69
5.1 Conclusions 69
5.2 Future Outlook 69
References 71
Advisor: Yi-Cheng Chen (陳怡呈)    Review Date: 2025-02-25
