Thesis 110521110 Detailed Record




Author Shao-Chieh Chao (趙紹傑)   Department Electrical Engineering
Title A Robotic Arm Grasping System with Recycling Classification
Related theses
★ Control of a hybrid power supply system for direct methanol fuel cells
★ Water quality monitoring for hydroponic plants using refractive index detection
★ DSP-based automatic guidance and control system for a model car
★ Redesign of motion control for a rotary inverted pendulum
★ Fuzzy control decisions for freeway on-ramp and off-ramp signals
★ On the fuzziness of fuzzy sets
★ Further improvement of motion control performance for a dual-mass spring-coupled system
★ Machine vision system for air hockey
★ Offensive and defensive robot control for air hockey
★ Attitude control of a model helicopter
★ Stability analysis and design of fuzzy control systems
★ Real-time recognition system for access control monitoring
★ Air hockey: human versus robotic arm play
★ Mahjong tile recognition system
★ Application of correlation-error neural networks to radiometric measurement of vegetation and soil moisture
★ Stand-up control of a three-link robot
  1. The author has agreed to make this electronic thesis openly available immediately.
  2. The open-access full text is licensed only for personal, non-commercial academic research use: searching, reading, and printing.
  3. Please comply with the Copyright Act of the Republic of China; do not reproduce, distribute, adapt, repost, or broadcast this work without authorization.

Abstract (Chinese) This thesis designs a robotic arm grasping system with recycling classification. By incorporating machine vision and a deep learning network, the system identifies the categories of unknown recyclables in the camera image and controls a six-axis robotic arm to pick each object into its designated category. The five categories are plastic, glass, paper containers, metal containers, and styrofoam.
The research items are as follows. Using images from the RGB-D camera mounted on the end-effector of the robotic arm: (1) a deep learning instance segmentation network segments and recognizes objects; (2) image processing techniques repair the raw depth image, whose depth is incomplete for transparent plastic and glass objects; (3) the instance segmentation results yield each object's contour, from which its area and center are computed; (4) perspective projection with the camera's intrinsic parameters converts the depth image into a point cloud; (5) the point cloud is fed into a deep learning grasp detection network, which outputs grasp parameters; (6) the grasp parameters are converted into the actual grasp position and angle, followed by center-based filtering. For the six-axis arm's grasping motion, the following items are completed: (1) forward kinematics sets different initial points and category-specific end points; (2) a virtual environment prevents the arm from colliding with real obstacles during motion; (3) inverse kinematics computes the rotation angle of each joint when the end-effector center reaches the grasp point; (4) joint angle limits avoid large end-effector displacement during posture changes; (5) the grasp detection output, expressed in the camera frame, is transformed into the robot frame to drive the arm to the grasp point. With these techniques combined, inverse kinematics controls the arm to grasp target objects within the limits of the image size and the mechanical structure.
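Step (4) above, converting a depth image into a point cloud by perspective projection, can be sketched as follows. This is a minimal illustration assuming a standard pinhole model with intrinsics (fx, fy, cx, cy), not the thesis's actual implementation; the intrinsic values below are made up.

```python
import numpy as np

def depth_to_point_cloud(depth, fx, fy, cx, cy):
    """Back-project a depth image (meters) into an Nx3 point cloud
    in the camera frame, using the pinhole perspective model."""
    h, w = depth.shape
    u, v = np.meshgrid(np.arange(w), np.arange(h))  # pixel coordinates
    z = depth
    x = (u - cx) * z / fx   # X: right in the image plane
    y = (v - cy) * z / fy   # Y: down in the image plane
    pts = np.stack([x, y, z], axis=-1).reshape(-1, 3)
    return pts[pts[:, 2] > 0]  # drop invalid (zero-depth) pixels

# Example: a flat 2x2 depth patch at 1 m, hypothetical intrinsics
cloud = depth_to_point_cloud(np.ones((2, 2)), fx=600.0, fy=600.0, cx=0.5, cy=0.5)
```

Each valid pixel maps to one 3D point; the real pipeline would use the D435i's factory intrinsics and the repaired depth image from step (2).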
This research develops its software system on Linux with the Robot Operating System (ROS). Through ROS's distributed architecture and peer-to-peer network, all information is collected, exchanged, and integrated, realizing a co-design of software and hardware.
Abstract (English) This thesis aims to design a robotic arm grasping system for recyclables classification. The unknown recyclables in the image are identified by an RGB-D camera and a deep learning network, and a six-axis robotic arm is controlled to move the recyclables into the specified boxes. The considered recyclables are plastic, glass, paper containers, metal containers, and styrofoam.
This thesis utilizes the images from an RGB-D camera mounted at the end of a robotic arm to achieve the following objectives. (1) A deep learning instance segmentation network segments and classifies objects in the captured images. (2) Image processing techniques enhance the depth information of transparent objects, such as plastic and glass, whose depth measurements may be incomplete. (3) The instance segmentation results are used to extract the contour of each identified object and to calculate its area and center. (4) Using the camera's intrinsic parameters, the depth information is transformed into a point cloud. (5) The point cloud is fed into a deep learning grasp detection network, which outputs the grasping parameters. (6) The grasping parameters are converted into the actual grasping position and angle, followed by center screening. In addition, for the six-axis robot arm to pick up the target object, the following tasks are completed. (1) Forward kinematics sets up different initial points and end points for the arm's operation. (2) A virtual environment is established to prevent collisions during robot movement. (3) Inverse kinematics calculates the rotation angle of each axis when the center of the end-effector reaches the grasp point. (4) Joint angle constraints avoid large shifts of the end-effector during posture changes. (5) The grasp detection output in the camera frame is transformed into the robot base frame so that the arm can reach the grasping point. After these tasks are completed, inverse kinematics can control the robotic arm to grasp the target object within the limits of the vision range and the constraints of the mechanism.
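Objective (3), computing each object's area and center from its segmentation result, can be illustrated with a minimal NumPy sketch. The thesis extracts contours; here, as a simplifying assumption, the area is the mask's pixel count and the center is the mask centroid.

```python
import numpy as np

def mask_area_and_center(mask):
    """Area (pixel count) and centroid (u, v) of a binary instance mask."""
    ys, xs = np.nonzero(mask)        # row and column indices of mask pixels
    area = xs.size
    if area == 0:
        return 0, None
    return area, (xs.mean(), ys.mean())  # (column, row) center

# Example: a 3x4 block of ones inside an 8x8 mask
mask = np.zeros((8, 8), dtype=np.uint8)
mask[2:5, 1:5] = 1
area, center = mask_area_and_center(mask)
```

In the real system one mask per instance comes from the segmentation network, and the center is later used for the center screening in step (6).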
This thesis uses the Robot Operating System (ROS) to develop the software system in a Linux environment. All information is collected, transmitted, and integrated through ROS's distributed architecture and peer-to-peer network to achieve software and hardware collaboration.
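The coordinate transformation in task (5) above, mapping a grasp point from the camera frame into the robot base frame, amounts to multiplying by a 4x4 homogeneous transformation matrix. A minimal sketch follows; the transform values are invented for illustration and stand in for a real hand-eye calibration result.

```python
import numpy as np

def to_base_frame(T_base_cam, p_cam):
    """Map a 3D point from the camera frame to the robot base frame
    using a 4x4 homogeneous transformation matrix."""
    p_h = np.append(p_cam, 1.0)       # homogeneous coordinates
    return (T_base_cam @ p_h)[:3]

# Hypothetical calibration: camera rotated 180 degrees about X
# relative to the base, translated by (0.4, 0.0, 0.6) m.
T_base_cam = np.array([[1.0,  0.0,  0.0, 0.4],
                       [0.0, -1.0,  0.0, 0.0],
                       [0.0,  0.0, -1.0, 0.6],
                       [0.0,  0.0,  0.0, 1.0]])
grasp_base = to_base_frame(T_base_cam, np.array([0.1, 0.2, 0.5]))
```

The grasp orientation would be transformed the same way by composing rotation matrices; the resulting base-frame pose is what inverse kinematics then solves for.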
Keywords (Chinese) ★ 運動學 (kinematics)
★ 實例分割 (instance segmentation)
★ 六軸機械手臂 (six-axis robotic arm)
★ ROS
★ 座標轉換 (coordinate transformation)
★ 抓取檢測 (grasp detection)
Keywords (English) ★ Kinematics
★ Instance segmentation
★ Six-axis robotic arm
★ ROS
★ Coordinate transformation
★ Grasp detection
Table of Contents
Abstract
Acknowledgements
Table of Contents
List of Figures
List of Tables
Chapter 1  Introduction
 1.1 Research background and motivation
 1.2 Literature review
 1.3 Objectives
 1.4 Thesis organization
Chapter 2  System Architecture and Hardware/Software Overview
 2.1 System architecture
 2.2 Hardware architecture
 2.3 Software overview
  2.3.1 Introduction to ROS
  2.3.2 Introduction to the MoveIt package
Chapter 3  Image Segmentation and Grasp Detection
 3.1 Image segmentation
  3.1.1 Image segmentation network
  3.1.2 Training data for image segmentation
  3.1.3 Training results
  3.1.4 Contour extraction
 3.2 Depth completion for transparent objects
 3.3 Perspective projection
 3.4 Grasp detection
  3.4.1 Grasp detection network
  3.4.2 Grasp pose representation of the network output
  3.4.3 Grasp detection dataset
  3.4.4 Grasp detection filtering
Chapter 4  Applications of the Robotic Arm
 4.1 Transformation matrices
 4.2 Forward kinematics
 4.3 Inverse kinematics
 4.4 Coordinate transformation
 4.5 Applications of the Robot Operating System
  4.5.1 ROS node descriptions
  4.5.2 Experimental nodes and topic flow
Chapter 5  Experimental Results
 5.1 Virtual and real environments
 5.2 Camera image correction
 5.3 Error measurement
  5.3.1 RGB-D camera depth error
  5.3.2 Error between the image grasp point and the actual arm position
 5.4 Experimental procedure
  5.4.1 Robotic arm initial points
  5.4.2 Placement points
  5.4.3 Checking the workspace for objects
 5.5 Experimental results
Chapter 6  Conclusions and Future Work
References
Advisor Wen-June Wang (王文俊)   Date of approval 2023-07-26
