NCU Institutional Repository - Item 987654321/86749


Please use this permanent URL to cite or link to this item: http://ir.lib.ncu.edu.tw/handle/987654321/86749


    Title: 深度學習網路之姿態感測分析與格鬥機器人控制; Battle robot control using deep-learning-network-based posture detection
    Author: 管祥祐; Kuan, Hsiang-Yu
    Contributor: Department of Electrical Engineering
    Keywords: Inertial measurement unit; Battle robot; CNN
    Date: 2021-08-23
    Uploaded: 2021-12-07 13:10:44 (UTC+8)
    Publisher: National Central University
    Abstract: This study recognizes dynamic human activity recognition (HAR) postures from time-series data acquired by a set of whole-body wearable motion sensors, and applies the recognized motions to control a battle robot in real time. We used our homemade motion sensors, each consisting of a nine-axis inertial measurement unit (IMU) that measures 3-axis angular velocity, 3-axis acceleration, and 3-axis geomagnetic field strength. Five motion sensors, attached to the subject's four limbs and the front of the waist, acquired the subject's whole-body motion and transmitted it wirelessly over WiFi to a remote PC for processing. Subjects were asked to complete twelve motions, eleven fighting motions and one resting motion, by following the fighting actions shown in a video clip; these form the HAR dataset. Two labeling methods provided the motion intervals for network training: marking intervals from the video segments, or finding the onset-offset points of each motion action. The labeled data were segmented with different sensor channel counts and window sizes to extract training features, and three deep-learning networks, CNN, LSTM, and CNN + LSTM, were trained to recognize the motions; the best model of each network was compared along with its parameter count. Experiments confirmed that the system recognizes the subject's motions in real time, and that the robot can be smoothly controlled to perform the motions correctly.
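The windowed segmentation step described above (slicing labeled multi-channel IMU streams into fixed-size windows for network training) can be sketched as follows. This is a minimal illustration, not the thesis's actual code; the channel count (45 = 5 sensors × 9 axes) and the window/stride values are illustrative assumptions.

```python
import numpy as np

def sliding_windows(data: np.ndarray, window_size: int, stride: int) -> np.ndarray:
    """Split a (T, C) multi-channel IMU stream into overlapping
    (window_size, C) segments, as used for windowed HAR training data."""
    T = data.shape[0]
    starts = range(0, T - window_size + 1, stride)
    return np.stack([data[s:s + window_size] for s in starts])

# Hypothetical stream: 200 samples, 45 channels (5 sensors x 9 axes).
stream = np.random.randn(200, 45)
windows = sliding_windows(stream, window_size=50, stride=25)
print(windows.shape)  # (7, 50, 45)
```

Varying `window_size` (and which sensor channels are kept) yields the different feature sets the study compares across the CNN, LSTM, and CNN + LSTM models.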
    Appears in collections: [Graduate Institute of Electrical Engineering] Theses & Dissertations

    Files in this item:

    File: index.html | Size: 0 KB | Format: HTML | Views: 88 | View/Open


    All items in NCUIR are protected by copyright, with all rights reserved.
