NCU Institutional Repository (中大機構典藏): Item 987654321/96397


    Please use this identifier to cite or link to this item: http://ir.lib.ncu.edu.tw/handle/987654321/96397


    Title: Advancing Human Action Recognition for Precision Assembly Using Vision and Mechanomyography Signals
    Authors: 泰利;Teeli, Ashiq Hussain
    Contributors: Department of Mechanical Engineering
    Keywords: Human Action Recognition;Precision Assembly;Wearable Sensors;Hand Action Recognition;Deep Learning Model;Transition Instability;Industrial Safety;Human-Robot Collaboration;Fine-Action Recognition
    Date: 2024-11-11
    Issue Date: 2025-04-09 18:25:23 (UTC+8)
    Publisher: National Central University
    Abstract:
    This study addresses the challenges of continuous Human Action Recognition (HAR) in
    precision assembly environments, focusing on the limitations of camera-based systems in
    recognizing small actions, transition instability, and overall accuracy. To this end, a novel
    multi-module system is proposed, including a camera that recognizes body movements, a
    bracelet that recognizes precise hand movements, and a secondary camera that confirms
    whether hand-movement recognition meets its activation conditions. The sensing signals are fed
    to the body and hand action recognition modules, and a decision-making module then
    integrates their outputs into a more reliable recognition result. This research employed an
    experimental approach using LEGO car assembly and electronic connector assembly tasks to
    evaluate the performance of the system. Three deep learning models, AE + LSTM, LSTM +
    Attention, and LSTM, were compared. The results show that the LSTM + Attention model
    demonstrated superior performance in both hand and body action recognition. The proposed
    method also achieved significant improvements in recognizing both large-scale body
    movements and small hand actions, with the wearable sensor outperforming the camera-based
    system in fine-action recognition.
    Finally, the decision-making model effectively managed transition instability and enhanced the
    overall reliability of the HAR system. This research contributes to the field of HAR by
    proposing a robust solution for precision assembly environments, potentially improving safety,
    efficiency, and human-robot collaboration in industrial settings. Future work should focus on
    refining the algorithms to better handle noise. Additionally, emphasis should be placed on
    ensuring that the HAR system is user-friendly and effective in dynamic industrial settings.
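    The abstract reports that an LSTM + Attention model outperformed a plain LSTM, but does not specify the attention mechanism. A common variant replaces the LSTM's final hidden state with an attention-weighted average of all hidden states before classification. The following is a minimal pure-Python sketch of that pooling step only, under that assumption; the hidden states and query vector here are hypothetical toy values (in a real model they would come from a trained LSTM).

    ```python
    import math

    def attention_pool(H, v):
        """Attention pooling over LSTM hidden states (illustrative sketch).
        H: list of T hidden-state vectors (each a list of d floats);
        v: a learned d-dim query vector (hand-picked here, trained in practice).
        Returns an attention-weighted average of the hidden states, used in
        place of the plain LSTM's final hidden state before the classifier."""
        scores = [sum(hi * vi for hi, vi in zip(h, v)) for h in H]
        m = max(scores)                    # subtract max for numerical stability
        exps = [math.exp(s - m) for s in scores]
        z = sum(exps)
        alpha = [e / z for e in exps]      # attention weights, sum to 1
        return [sum(alpha[t] * H[t][i] for t in range(len(H)))
                for i in range(len(v))]

    # Toy example: 3 time steps, 2-dim hidden states.
    H = [[1.0, 0.0], [0.0, 1.0], [1.0, 1.0]]
    v = [1.0, 0.0]                         # query favouring the first dimension
    c = attention_pool(H, v)
    print([round(x, 4) for x in c])        # [0.8446, 0.5777]
    ```

    Because steps whose hidden states align with the query receive higher weight, short but informative frames (e.g. a brief fine hand action) can dominate the pooled vector rather than being washed out by the final time step.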
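    The abstract also describes a decision-making module that integrates the body- and hand-stream outputs and manages transition instability, without giving the algorithm. The sketch below is only an illustration of that idea, assuming a simple confidence-based choice between the two streams plus a debounce window; the class name, thresholds, and labels are all hypothetical.

    ```python
    class DecisionModule:
        """Hypothetical decision-level fusion for a two-stream HAR system
        (body camera + hand-worn sensor). Not the thesis algorithm."""

        def __init__(self, hold_frames=3):
            # hold_frames: consecutive identical fused labels required before
            # the output switches -- a debounce against transition instability.
            self.hold_frames = hold_frames
            self.current = None     # label currently being emitted
            self.candidate = None   # label waiting to take over
            self.count = 0

        def fuse(self, body_pred, hand_pred):
            """Pick the more confident stream for this frame.
            Each prediction is a (label, confidence) pair."""
            return body_pred if body_pred[1] >= hand_pred[1] else hand_pred

        def update(self, body_pred, hand_pred):
            label, _ = self.fuse(body_pred, hand_pred)
            if label == self.current:
                self.candidate, self.count = None, 0
            elif label == self.candidate:
                self.count += 1
                if self.count >= self.hold_frames:
                    self.current, self.candidate, self.count = label, None, 0
            else:
                self.candidate, self.count = label, 1
            return self.current

    dm = DecisionModule(hold_frames=3)
    stream = [(("idle", 0.9), ("idle", 0.8))] * 2 + \
             [(("pick", 0.4), ("screw", 0.7))] * 4  # hand stream wins on fine actions
    out = [dm.update(b, h) for b, h in stream]
    print(out)  # [None, None, None, None, 'screw', 'screw']
    ```

    The debounce means a one- or two-frame flicker during an action transition never reaches the output, at the cost of a short, fixed labelling delay.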
    Appears in Collections:[Graduate Institute of Mechanical Engineering] Electronic Thesis & Dissertation

    Files in This Item:

    File: index.html (0 KB, HTML)


    All items in NCUIR are protected by copyright, with all rights reserved.


    Copyright National Central University Library. DSpace Software Copyright © 2002-2004 MIT & Hewlett-Packard / Enhanced by NTU Library IR team.