Graduate Thesis 106521084: Complete Metadata Record

DC Field | Value | Language
dc.contributor | 電機工程學系 (Department of Electrical Engineering) | zh_TW
dc.creator | 張華延 | zh_TW
dc.creator | Hun-Yen Chang | en_US
dc.date.accessioned | 2019-07-16T07:39:07Z
dc.date.available | 2019-07-16T07:39:07Z
dc.date.issued | 2019
dc.identifier.uri | http://ir.lib.ncu.edu.tw:88/thesis/view_etd.asp?URN=106521084
dc.contributor.department | 電機工程學系 (Department of Electrical Engineering) | zh_TW
dc.description | 國立中央大學 (National Central University) | zh_TW
dc.description | National Central University | en_US
dc.description.abstract | The main purpose of this thesis is to introduce a vision system and use inverse kinematics to control a six-axis robot arm to move and grasp five different objects. The target object is detected and identified from images, and the relative position between the target object and the robot arm is computed; as long as the object lies within the image field of view and the mechanical limits of the arm, it can be grasped no matter where it is placed. The software system is developed with the Robot Operating System (ROS) under Linux. Through the ROS distributed architecture and peer-to-peer network, all information is collected and exchanged, and the NVIDIA Jetson TX2, the six-axis robot arm, the industrial camera, and the gripper are integrated to realize a software/hardware co-design. The research items are as follows. Using the monocular images output by the industrial camera mounted on the end of the robot arm, we complete three tasks: (1) detect and identify objects with deep learning; (2) refine the bounding boxes output by the deep learning model with image processing techniques; (3) compute the relative position between the target object and the camera with the pinhole camera model. For the process of grasping the target object with the six-axis robot arm, we complete the following items: (1) use forward kinematics to obtain the relative position and orientation between two coordinate frames; (2) set a specific point in space and use inverse kinematics to compute the joint angles at which the tool centre point reaches that point; (3) build a virtual environment to prevent the arm from colliding with obstacles during motion; (4) set joint-angle limits to avoid large end-effector displacements caused by posture changes; (5) apply path constraints to avoid collisions with the target object during motion; (6) use trajectory planning to generate intermediate waypoints between the initial point and the target point. Combining these techniques, the robot arm can be controlled with inverse kinematics to grasp the target object within the limits of the image field of view and the arm's mechanical structure. | zh_TW
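As a minimal illustration of the pinhole-model step described in the abstract, the sketch below back-projects a detected bounding-box centre pixel to a position in the camera frame. The intrinsic parameters, pixel coordinates, and depth value are hypothetical placeholders, not values from the thesis.

    import numpy as np

    def pixel_to_camera_xyz(u, v, depth_z, fx, fy, cx, cy):
        # Pinhole back-projection: X = (u - cx) * Z / fx, Y = (v - cy) * Z / fy.
        x = (u - cx) * depth_z / fx
        y = (v - cy) * depth_z / fy
        return np.array([x, y, depth_z])

    # Bounding-box centre at pixel (640, 360), an assumed object distance of 0.45 m,
    # and made-up intrinsics for a 1280x720 image.
    print(pixel_to_camera_xyz(640, 360, 0.45, fx=1050.0, fy=1050.0, cx=640.0, cy=360.0))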
dc.description.abstract | The main purpose of this thesis is to control a six-degree-of-freedom (6-DOF) robot arm to perform a pick-and-place application for five different objects. The relative position between each object and the robot is calculated by using vision to detect and identify the objects; the objects can be placed at random positions, as long as they lie within the mechanical limits of the robot and the field of view of the camera. With this information from vision, the robot can successfully pick and place the objects. The Robot Operating System (ROS) is used to develop the software system under Linux. The NVIDIA Jetson TX2, the robot arm, the industrial camera, and the gripper are integrated through the ROS distributed architecture and peer-to-peer network, which also collects and exchanges all information and data among them; this collaborative design realizes the software and hardware integration. The key point is that machine vision is used to detect and identify the target objects and to calculate the relative position between each object and the robot arm. Through the monocular images from the industrial camera mounted on the end of the robot arm, we complete the following three steps. First, we use deep learning to detect and identify objects. Second, we refine the bounding boxes produced by the deep learning model using image processing techniques. Third, we calculate the relative position between the object and the camera with the pinhole camera model. With regard to the robot application, we complete the following tasks. First, the relative position and orientation between two coordinate frames are calculated by forward kinematics. Second, a specific point in space is set, and the joint angles at which the robot tool centre reaches that point are computed by inverse kinematics. Third, a virtual environment is built to prevent collisions during robot motion. Fourth, joint-angle constraints are set to avoid large displacements of the robot end-effector caused by posture changes. Fifth, path constraints are applied to prevent collisions between the robot and the target object. Sixth, intermediate waypoints between the initial point and the target point are generated by trajectory planning. With these tasks completed, the robot can pick and place randomly placed objects through inverse kinematics within the limits of the camera's field of view and the constraints of the mechanism. | en_US
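The abstract does not name the motion-planning library used on top of ROS; the following sketch assumes a MoveIt-style Python interface (moveit_commander) purely to illustrate the described sequence of adding a virtual obstacle, solving inverse kinematics for a pose goal, and letting the planner generate intermediate waypoints. The group name "manipulator", the obstacle, and the goal pose are illustrative assumptions.

    import sys
    import rospy
    import moveit_commander
    from geometry_msgs.msg import Pose, PoseStamped

    moveit_commander.roscpp_initialize(sys.argv)
    rospy.init_node("pick_demo", anonymous=True)

    scene = moveit_commander.PlanningSceneInterface()
    arm = moveit_commander.MoveGroupCommander("manipulator")  # hypothetical group name

    # Virtual obstacle (e.g. the table surface) so the planner avoids collisions.
    table = PoseStamped()
    table.header.frame_id = arm.get_planning_frame()
    table.pose.position.z = -0.02
    scene.add_box("table", table, size=(1.0, 1.0, 0.02))

    # Tool-centre goal pose derived from the camera measurement (placeholder values).
    goal = Pose()
    goal.position.x, goal.position.y, goal.position.z = 0.40, 0.10, 0.15
    goal.orientation.w = 1.0

    arm.set_pose_target(goal)    # inverse kinematics for the pose goal
    arm.go(wait=True)            # trajectory planning inserts intermediate waypoints
    arm.stop()
    arm.clear_pose_targets()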
dc.subject | 運動學 (Kinematics) | zh_TW
dc.subject | 六軸機械手臂 (Six-axis robot arm) | zh_TW
dc.subject | ROS | zh_TW
dc.subject | 單眼視覺 (Monocular vision) | zh_TW
dc.subject | 深度學習 (Deep learning) | zh_TW
dc.subject | 影像處理 (Image processing) | zh_TW
dc.subject | 路徑規劃 (Trajectory planning) | zh_TW
dc.subject | Kinematics | en_US
dc.subject | 6 DOF robot | en_US
dc.subject | ROS | en_US
dc.subject | Monocular vision | en_US
dc.subject | Deep learning | en_US
dc.subject | Image processing | en_US
dc.subject | Trajectory planning | en_US
dc.title | 基於深度學習與影像處理技術之單眼視覺六軸機械手臂控制 (Monocular-vision control of a six-axis robot arm based on deep learning and image processing techniques) | zh_TW
dc.language.iso | zh-TW | zh-TW
dc.type | 博碩士論文 (graduate thesis) | zh_TW
dc.type | thesis | en_US
dc.publisher | National Central University | en_US
