NCU Institutional Repository — theses, past exam papers, journal articles, and research projects: Item 987654321/95981


    Please use this identifier to cite or link to this item: http://ir.lib.ncu.edu.tw/handle/987654321/95981


    Title: Model Learning Based on Machine Vision for Unknown Object 6DoF Grasp and 3D Human Pose Reconstruction
    Authors: 廖子程;Liao, Zi-Cheng
    Contributors: Department of Mechanical Engineering
    Keywords: Robotic arm; 6DoF grasp; Depth camera; 3D convolutional neural network; HRNet; Camera array; Triangulation; 3D human pose
    Date: 2024-08-15
    Issue Date: 2024-10-09 17:27:55 (UTC+8)
    Publisher: National Central University
    Abstract: With the recent AI boom and the emergence of various collaborative and humanoid robots, the environmental perception and task decision-making capabilities of collaborative robots have become a key research direction in both academia and industry. This thesis investigates an autonomous grasping algorithm based on 3D features for collaborative robotic arms together with 3D reconstruction of human posture using a multi-camera array, and combines the two in a shared workspace so that the robotic arm can perceive the operator's 3D posture in real time while performing grasping tasks and respond immediately for safety. The grasping algorithm analyzes the 3D information captured by a depth camera to autonomously decide grasp poses for arbitrarily piled workpieces. It randomly seeds points on the point cloud and progressively filters out feasible 6DoF grasp poses; the randomly generated candidates are screened by collision detection and a 3D convolutional neural network classifier, improving the grasp success rate while ensuring safe execution of the grasping task. The accuracy and stability of the algorithm in real grasping tasks are verified in this study, along with the effect of different task environments and parameter changes on the algorithm's final output. The 3D reconstruction of human posture with the multi-camera array is based on HRNet, trained on a customized human keypoint dataset for 2D detection of human keypoints.
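The grasp-candidate pipeline described in the abstract — random seeding on the point cloud, then collision and classifier filtering of 6DoF poses — can be sketched roughly as follows. This is not the thesis implementation: the synthetic point cloud, cylinder collision test, and `score` function below are simplified stand-ins (the thesis uses a trained 3D CNN classifier where `score` here is a plain geometric heuristic).

```python
import numpy as np

rng = np.random.default_rng(0)

def sample_grasp_candidates(points, normals, n_seeds=64):
    """Seed random surface points and build one candidate 6DoF pose per seed.

    Each pose is (position, rotation matrix); the approach axis (z) is the
    inward surface normal, with an arbitrary in-plane orientation.
    """
    idx = rng.choice(len(points), size=min(n_seeds, len(points)), replace=False)
    candidates = []
    for i in idx:
        z = -normals[i] / np.linalg.norm(normals[i])   # approach direction
        # Build an orthonormal frame around the approach axis.
        a = np.array([1.0, 0.0, 0.0]) if abs(z[0]) < 0.9 else np.array([0.0, 1.0, 0.0])
        x = np.cross(a, z); x /= np.linalg.norm(x)
        y = np.cross(z, x)
        candidates.append((points[i], np.column_stack([x, y, z])))
    return candidates

def collision_free(pos, R, points, gripper_radius=0.04):
    """Toy collision check: reject poses whose approach cylinder is crowded."""
    d = points - pos
    along = d @ R[:, 2]
    radial = np.linalg.norm(d - np.outer(along, R[:, 2]), axis=1)
    blocking = (along < -0.01) & (radial < gripper_radius)
    return blocking.sum() < 5

def score(pos, R):
    """Stand-in for the 3D-CNN classifier: prefer top-down approaches."""
    return float(-R[2, 2])

# Synthetic pile: random points on a hemisphere, with outward normals.
pts = rng.normal(size=(500, 3)); pts /= np.linalg.norm(pts, axis=1, keepdims=True)
pts[:, 2] = np.abs(pts[:, 2])
nrm = pts.copy()

cands = sample_grasp_candidates(pts, nrm)
feasible = [(p, R) for p, R in cands if collision_free(p, R, pts)]
best = max(feasible, key=lambda c: score(*c))
print(f"{len(cands)} seeds -> {len(feasible)} collision-free; best score {score(*best):.3f}")
```

In the thesis pipeline the surviving candidates would be ranked by the learned classifier rather than by a heuristic; the overall shape (sample, geometric filter, learned ranking) is the part this sketch illustrates.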
    Multi-camera triangulation is then performed from the 2D image coordinates of the human body across the camera array. This study details the training strategy of HRNet on the customized dataset, the intrinsic and extrinsic camera calibration strategy for a multi-camera array arranged around the environment, and the 3D human posture optimization algorithm built on both. Experiments evaluate the 3D reconstruction accuracy of the multi-camera array and present the final 3D human posture results, together with the impact of the optimization algorithm on the 3D human pose in single and consecutive frames.
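The multi-camera triangulation step — recovering a 3D keypoint from its 2D detections in several calibrated views — is commonly done with the linear DLT method, sketched below. The four-camera ring layout, intrinsics, and keypoint position are illustrative assumptions, not values from the thesis.

```python
import numpy as np

def triangulate_dlt(proj_mats, points_2d):
    """Linear (DLT) triangulation of one 3D point from >= 2 calibrated views.

    proj_mats: list of 3x4 camera projection matrices P = K [R | t]
    points_2d: list of (u, v) pixel observations, one per camera
    """
    rows = []
    for P, (u, v) in zip(proj_mats, points_2d):
        rows.append(u * P[2] - P[0])
        rows.append(v * P[2] - P[1])
    A = np.stack(rows)
    # Homogeneous least squares: the point is the right singular vector
    # of A with the smallest singular value.
    _, _, Vt = np.linalg.svd(A)
    X = Vt[-1]
    return X[:3] / X[3]

def look_at_P(cam_pos, f=800.0, cx=640.0, cy=360.0):
    """Projection matrix for a camera at cam_pos looking at the origin
    (assumed pinhole intrinsics)."""
    z = -cam_pos / np.linalg.norm(cam_pos)            # optical axis toward origin
    up = np.array([0.0, 0.0, 1.0])
    x = np.cross(up, z); x /= np.linalg.norm(x)
    y = np.cross(z, x)
    R = np.stack([x, y, z])                           # world -> camera rotation
    t = -R @ cam_pos
    K = np.array([[f, 0, cx], [0, f, cy], [0, 0, 1.0]])
    return K @ np.hstack([R, t[:, None]])

# Four cameras on a ring around the workspace, as in a surround array.
cams = [look_at_P(np.array([3.0 * np.cos(a), 3.0 * np.sin(a), 1.5]))
        for a in np.linspace(0.0, 2 * np.pi, 4, endpoint=False)]

X_true = np.array([0.2, -0.1, 0.3])                   # e.g. one human keypoint
obs = []
for P in cams:
    x = P @ np.append(X_true, 1.0)
    obs.append(x[:2] / x[2])                          # project to pixels

X_hat = triangulate_dlt(cams, obs)
print("recovered:", np.round(X_hat, 6))
```

With noiseless synthetic observations the DLT solution matches the ground-truth point exactly; with real HRNet detections the 2D points are noisy, which is where a pose optimization stage such as the one described in the abstract earns its keep.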
    Appears in Collections:[Graduate Institute of Mechanical Engineering] Electronic Thesis & Dissertation

    Files in This Item:

    File: index.html — 0 KB, HTML — View/Open


    All items in NCUIR are protected by copyright, with all rights reserved.

