Master's/Doctoral Thesis Record 109581005




Name: Kuan-Lin Chen (陳冠霖)    Department: Department of Electrical Engineering
Thesis Title: An Intelligent Robot for Automatic Shuttlecock Collection and Inspection
Related Theses
★ Control study of a hybrid power system for direct methanol fuel cells
★ Water quality inspection of hydroponic plants using refractive index measurement
★ DSP-based automatic guidance and control system for a model car
★ Redesign of motion control for a rotary inverted pendulum
★ Fuzzy control decisions for freeway ramp signals
★ On the fuzziness of fuzzy sets
★ Further improvement of motion control performance for a dual-mass spring-coupled system
★ Vision system for table hockey
★ Robot offense and defense control for table hockey
★ Attitude control of a model helicopter
★ Stability analysis and design of fuzzy control systems
★ Real-time recognition system for access control monitoring
★ Table hockey: human versus robotic arm
★ Mahjong tile recognition system
★ Application of correlation-error neural networks to radiometric measurement of vegetation and soil moisture
★ Standing control of a three-link robot
  1. The electronic full text of this thesis is approved for immediate open access.
  2. The open-access electronic full text is licensed only for personal, non-profit retrieval, reading, and printing for academic research purposes.
  3. Please observe the relevant provisions of the Copyright Act of the Republic of China; do not reproduce, distribute, adapt, repost, or broadcast the work without authorization.

Abstract (Chinese) This thesis aims to design an automatic shuttlecock collection and inspection system. An automated guided vehicle (AGV) uses image recognition to pick up the shuttlecocks on a badminton court and carries them back to a base near a six-axis robotic arm; the shuttlecocks are then placed by hand on the platform under the arm, which grips each one in front of a camera that checks its integrity, so that good and bad shuttlecocks can be sorted.
The research items of this thesis are as follows. For shuttlecock collection by the AGV, the webcam mounted on the vehicle is used to complete three tasks: (1) detect and recognize shuttlecocks with deep learning; (2) compute the relative position between the target and the camera with a pinhole camera model, drive the AGV to the target, and collect the shuttlecock onto the vehicle with motors controlled through ROS; and (3) guide the AGV back to the base with AprilTags. For shuttlecock imaging, the images from the depth camera mounted at the end of the robotic arm and from the webcam used for inspection support three tasks: (1) detect the position and angle of the shuttlecock head and center with a deep learning network; (2) compute the relative position between the target and the camera; and (3) analyze the integrity of the shuttlecock with a deep learning network and image processing. For motion control of the robotic arm, the following procedures are completed: (1) build a virtual environment; (2) compute the transformation matrices of the arm's kinematic model; and (3) obtain the coordinates of the shuttlecock-head target point and drive the arm to that point with inverse kinematics. Together, these allow the AGV to complete shuttlecock collection and the robotic arm to complete shuttlecock grasping and classification.
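The pinhole-model distance step described above can be sketched in a few lines. The intrinsics, pixel values, and shuttlecock width below are illustrative assumptions, not the thesis's actual calibration values.

```python
# Pinhole camera model: a minimal sketch of estimating an object's position
# relative to the camera from a detection bounding box. All numbers are
# illustrative assumptions, not the thesis's calibration values.

def pixel_to_camera_offset(u, v, bbox_width_px, fx, fy, cx, cy, real_width_m):
    """Estimate (X, Y, Z) of an object in the camera frame from its detected
    pixel center (u, v) and bounding-box width, given camera intrinsics
    (fx, fy, cx, cy) and the object's known real-world width."""
    # Similar triangles: depth Z = focal_length * real_width / pixel_width
    Z = fx * real_width_m / bbox_width_px
    # Back-project the pixel center to camera coordinates at depth Z
    X = (u - cx) * Z / fx
    Y = (v - cy) * Z / fy
    return X, Y, Z

# Hypothetical intrinsics and a shuttlecock skirt width of about 6.6 cm
X, Y, Z = pixel_to_camera_offset(u=400, v=300, bbox_width_px=66,
                                 fx=600.0, fy=600.0, cx=320.0, cy=240.0,
                                 real_width_m=0.066)
```

The same offsets, expressed in the vehicle's frame, are what the AGV's motion controller would steer toward.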
This study develops its software under Linux with the Robot Operating System (ROS). Through ROS's distributed architecture and peer-to-peer network, all information is collected, transmitted, and integrated, realizing a coordinated software/hardware design. In experiments on an actual badminton court, the AGV collected the shuttlecocks on the court, the robotic arm's grasping accuracy was 92.1%, and the shuttlecock classification accuracy was 82%, showing that this thesis successfully establishes a system for picking up and classifying shuttlecocks.
Abstract (English) This thesis aims to design an automatic shuttlecock collection and inspection system. First, the shuttlecocks on the court are detected and picked up by an AGV (Automated Guided Vehicle) and brought back to the base. After all shuttlecocks are collected, a six-degree-of-freedom (6-DOF) robotic arm picks up each one and checks its integrity.
The research topics of this thesis are as follows. For shuttlecock collection by the AGV, using monocular vision from the webcam mounted on the vehicle, we complete (1) detecting and recognizing shuttlecocks, (2) calculating the relative position between the target object and the camera, and (3) guiding the AGV back to the base with AprilTag recognition. For shuttlecock imaging, based on the images from a depth camera installed at the end of the robotic arm and from a webcam, we complete (1) using a deep learning network to estimate the position and angle of the shuttlecock's head and body center, (2) calculating the relative position between the shuttlecock and the camera, and (3) assessing the integrity of the shuttlecock. In addition, for motion control of the robotic arm, we (1) build a virtual environment, (2) calculate the transformation matrices of the arm's kinematic model, and (3) obtain the coordinates of the shuttlecock's head and drive the arm to that target point with inverse kinematics. With these completed, the AGV can collect the shuttlecocks, and the robotic arm can pick them up and check their integrity.
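As an illustration of the coordinate-transformation and inverse-kinematics steps, the sketch below moves a camera-frame target into the arm's base frame with a homogeneous transform and then solves closed-form IK for a planar two-link arm. The thesis uses a full 6-DOF manipulator; the camera pose and link lengths here are made-up values chosen only to show the idea.

```python
import numpy as np

def ik_two_link(x, y, l1, l2):
    """Closed-form inverse kinematics for a planar two-link arm
    (elbow-down solution): returns joint angles reaching (x, y)."""
    c2 = (x**2 + y**2 - l1**2 - l2**2) / (2 * l1 * l2)
    theta2 = np.arccos(np.clip(c2, -1.0, 1.0))
    theta1 = np.arctan2(y, x) - np.arctan2(l2 * np.sin(theta2),
                                           l1 + l2 * np.cos(theta2))
    return theta1, theta2

# Hypothetical camera-to-base homogeneous transform: camera 0.1 m above
# the base origin with axes aligned (identity rotation).
T_base_cam = np.array([[1.0, 0.0, 0.0, 0.0],
                       [0.0, 1.0, 0.0, 0.0],
                       [0.0, 0.0, 1.0, 0.1],
                       [0.0, 0.0, 0.0, 1.0]])

p_cam = np.array([0.3, 0.2, 0.0, 1.0])   # shuttlecock head in camera frame
p_base = T_base_cam @ p_cam              # same point in the arm base frame

# Planar illustration: solve IK for the base-frame (x, y) with 0.25 m links
t1, t2 = ik_two_link(p_base[0], p_base[1], l1=0.25, l2=0.25)

# Forward-kinematics check: the joint angles do reach the target
reach_x = 0.25 * np.cos(t1) + 0.25 * np.cos(t1 + t2)
reach_y = 0.25 * np.sin(t1) + 0.25 * np.sin(t1 + t2)
```

For the real 6-DOF arm the same pattern holds, with the planar solver replaced by a numerical IK solver over the full transformation chain.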
This thesis uses the Robot Operating System (ROS) to develop the software system in a Linux environment. Through ROS's distributed architecture and peer-to-peer network, all information is collected, transmitted, and integrated to realize a coordinated software/hardware design. In experiments on an actual badminton court, the AGV collected all the shuttlecocks on the court, and the accuracy rates of robot-arm grasping and shuttlecock classification were 92.5% and 83%, respectively. It is concluded that this thesis establishes a system that can pick up the shuttlecocks on the court and check each shuttlecock's integrity.
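The publish/subscribe topic pattern that ROS provides (via rospy/roscpp) can be mimicked in plain Python to show how, for example, a vision node hands detected shuttlecock poses to an arm-control node. This is not rospy code; the topic name and message layout are made-up examples, not the thesis's actual interfaces.

```python
# Plain-Python sketch of ROS's publish/subscribe topic pattern. Publishers
# send messages to a named topic; subscribers register callbacks on it,
# much as the ROS master brokers connections between nodes.
from collections import defaultdict

class TopicBus:
    """Minimal in-process stand-in for ROS topics (illustration only)."""
    def __init__(self):
        self._subs = defaultdict(list)

    def subscribe(self, topic, callback):
        self._subs[topic].append(callback)

    def publish(self, topic, msg):
        # Deliver the message to every callback registered on this topic
        for cb in self._subs[topic]:
            cb(msg)

bus = TopicBus()
picked = []

# An "arm node" subscribes to shuttlecock poses found by a "vision node"
bus.subscribe("/shuttlecock_pose", lambda msg: picked.append(msg))

# The "vision node" publishes a detected pose (x, y, z in meters)
bus.publish("/shuttlecock_pose", {"x": 0.30, "y": 0.20, "z": 0.05})
```

In the actual system each node runs as a separate process and the ROS middleware carries the messages over the peer-to-peer network, but the data flow between nodes follows this same pattern.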
Keywords (Chinese) ★ shuttlecock detection
★ feather integrity recognition and analysis
★ six-axis robotic arm
★ coordinate transformation
★ kinematics
★ ROS
★ automated guided vehicle
Keywords (English) ★ shuttlecock
★ automated guided vehicle
★ Robot Operating System
★ coordinate transformation
★ 6-DOF robotic arm
Thesis Contents
Abstract (Chinese)
Abstract (English)
Acknowledgments
Table of Contents
List of Figures
List of Tables
Chapter 1 Introduction
1.1 Background and Motivation
1.2 Literature Review
1.3 Objectives
1.4 Thesis Organization
Chapter 2 System Architecture and Hardware/Software Overview
2.1 System Architecture
2.2 Hardware Architecture
2.2.1 AGV Side
2.2.2 Robotic Arm Side
2.3 Software Overview
2.3.1 Introduction to ROS
2.3.2 Introduction to the MoveIt Package
Chapter 3 Shuttlecock Detection and Integrity Recognition
3.1 Shuttlecock Detection and Recognition Networks
3.1.1 Shuttlecock Detection by the AGV
3.1.2 Shuttlecock Detection by the Robotic Arm
3.1.3 Shuttlecock Integrity Recognition
3.2 RANSAC Circle Fitting
3.3 Perspective Projection and Pinhole Projection
3.3.1 Perspective Projection
3.3.2 Pinhole Projection
Chapter 4 Applications of the Robotic Arm and AGV
4.1 AGV Functions
4.2 Robotic Arm Kinematics and Its Applications
4.2.1 Coordinate Transformation
4.2.2 Introduction to Inverse Kinematics
4.3 Applications of the Robot Operating System
4.3.1 ROS Node Functions on the AGV Side
4.3.2 Experimental Workflow on the AGV Side
4.3.3 ROS Node Functions on the Six-Axis Robotic Arm Side
4.3.4 Experimental Workflow on the Six-Axis Robotic Arm Side
Chapter 5 Experimental Results
5.1 AGV Experimental Results
5.2 Robotic Arm Experimental Results
5.2.1 Overview of Results
5.2.2 Grasping and Classification Accuracy Statistics for the Robotic Arm
5.2.3 Problems and Improvements
Chapter 6 Conclusion and Future Work
References


Advisor: Wen-June Wang (王文俊)    Review Date: 2022-11-21
