

Please use this permanent URL to cite or link to this item: http://ir.lib.ncu.edu.tw/handle/987654321/81397


    Title: Monocular-Vision Control of a Six-Axis Robot Arm Based on Deep Learning and Image Processing Techniques
    Author: Chang, Hun-Yen (張華延)
    Contributor: Department of Electrical Engineering
    Keywords: Kinematics; 6-DOF robot; ROS; Monocular vision; Deep learning; Image processing; Trajectory planning
    Date: 2019-07-16
    Upload time: 2019-09-03 15:51:01 (UTC+8)
    Publisher: National Central University
    Abstract: The main purpose of this thesis is to control a six-degree-of-freedom (6-DOF) robot arm to perform pick-and-place operations on five different objects. Machine vision detects and identifies the target object and computes its position relative to the robot arm; as long as the object lies within the camera's field of view and the robot's mechanical limits, it can be placed at random and still be successfully picked up.
    The software system is developed with the Robot Operating System (ROS) under Linux. ROS's distributed architecture and peer-to-peer network integrate the NVIDIA Jetson TX2, the robot arm, an industrial camera, and a gripper, collecting and exchanging all information among them to realize a co-designed software and hardware system.
    Using the monocular image from the industrial camera mounted on the end of the robot arm, three vision tasks are completed: (1) detecting and identifying objects with deep learning; (2) refining the bounding boxes produced by the deep-learning detector with image processing techniques; (3) computing the position of the target object relative to the camera with the pinhole camera model. For the pick-and-place motion itself, the following tasks are accomplished: (1) forward kinematics computes the relative position and orientation between two coordinate frames; (2) inverse kinematics solves for the joint angles that bring the tool center point to a specified point in space; (3) a virtual environment prevents the robot arm from colliding with obstacles during motion; (4) joint-angle limits avoid large end-effector displacements during posture changes; (5) path constraints prevent collisions with the target object; (6) trajectory planning generates intermediate waypoints between the initial and target points. Combining these techniques, the robot arm can be controlled through inverse kinematics to grasp a randomly placed target object within the limits of the image size and the robot's structure.
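Vision step (3) above, back-projecting a detected pixel into camera coordinates with the pinhole model, can be sketched as follows. This is a minimal illustration, not the thesis's implementation; the intrinsic parameters (fx, fy, cx, cy) and the depth value are hypothetical placeholders.

```python
import numpy as np

def pixel_to_camera(u, v, depth, fx, fy, cx, cy):
    """Back-project pixel (u, v) at a known depth into camera-frame XYZ.

    Pinhole model: u = fx * X / Z + cx, v = fy * Y / Z + cy,
    so given Z (depth), X and Y follow by inversion.
    """
    x = (u - cx) * depth / fx
    y = (v - cy) * depth / fy
    return np.array([x, y, depth])

# Hypothetical intrinsics for a 1920x1080 sensor; a pixel 100 px right
# of the principal point at 0.5 m depth maps to X = 100 * 0.5 / 1000 = 0.05 m.
p = pixel_to_camera(1060, 540, 0.5, fx=1000.0, fy=1000.0, cx=960.0, cy=540.0)
```

In practice the depth Z is not observable from a single monocular image alone; it must come from a known object size or a known working plane, which is why the thesis constrains object placement to the robot's workspace.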
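The forward-kinematics step, computing the relative pose between two coordinate frames, can be illustrated with a standard Denavit-Hartenberg chain of homogeneous transforms. The DH parameters below describe a toy two-link planar arm for illustration, not the actual 6-DOF robot used in the thesis.

```python
import numpy as np

def dh_transform(theta, d, a, alpha):
    """Homogeneous transform for one Denavit-Hartenberg link."""
    ct, st = np.cos(theta), np.sin(theta)
    ca, sa = np.cos(alpha), np.sin(alpha)
    return np.array([
        [ct, -st * ca,  st * sa, a * ct],
        [st,  ct * ca, -ct * sa, a * st],
        [0.0,      sa,       ca,      d],
        [0.0,     0.0,      0.0,    1.0],
    ])

def forward_kinematics(dh_params, joint_angles):
    """Chain the per-joint transforms: base frame -> tool frame.

    dh_params: list of (d, a, alpha) per joint; joint_angles: theta per joint.
    Returns the 4x4 pose of the tool frame in the base frame.
    """
    T = np.eye(4)
    for (d, a, alpha), theta in zip(dh_params, joint_angles):
        T = T @ dh_transform(theta, d, a, alpha)
    return T

# Toy two-link planar arm with unit link lengths: both joints at zero
# stretch the arm along x, so the tool sits at (2, 0, 0).
two_link = [(0.0, 1.0, 0.0), (0.0, 1.0, 0.0)]
T = forward_kinematics(two_link, [0.0, 0.0])
```

Inverse kinematics, as used in the thesis, runs this map in reverse: given a desired tool pose, a numeric or analytic solver finds joint angles satisfying `forward_kinematics(dh, q) = T_target`, subject to the joint-angle limits described above.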
    Appears in Collections: [Graduate Institute of Electrical Engineering] Master's and Doctoral Theses

    Files in This Item:

    File: index.html | Size: 0 Kb | Format: HTML | Views: 105


    All items in NCUIR are protected by original copyright.

