NCU Institutional Repository (中大機構典藏): downloads of theses and dissertations, past exam papers, journal articles, and research projects: Item 987654321/86749
RC Version 7.0 © Powered By DSPACE, MIT. Enhanced by NTU Library IR team.


    Please use this permanent URL to cite or link to this item: http://ir.lib.ncu.edu.tw/handle/987654321/86749


    Title: Posture sensing analysis and battle robot control using deep learning networks (深度學習網路之姿態感測分析與格鬥機器人控制); Battle robot control using deep learning network based posture detection
    Author: Kuan, Hsiang-Yu (管祥祐)
    Contributors: Department of Electrical Engineering
    Keywords: Inertial measurement unit; Battle robot; CNN
    Date: 2021-08-23
    Upload time: 2021-12-07 13:10:44 (UTC+8)
    Publisher: National Central University
    Abstract: This study uses wearable sensors to acquire time-series data representing human activity recognition (HAR) dynamics, recognizes the performed actions, and reproduces them on a robot, yielding a system for real-time control of a battle robot. Subjects wore five laboratory-made motion sensors, each built around a nine-axis inertial measurement unit (IMU) measuring 3-axis angular velocity, 3-axis acceleration, and 3-axis geomagnetic field strength. The measurements were transmitted wirelessly over WiFi to a remote PC for processing. The five sensors were attached to the four limbs and the front of the waist to capture the subject's whole-body motion state. Subjects were requested to complete twelve motions for this HAR dataset (eleven fighting motions and one resting motion) by following the fighting actions shown in a video clip. Two methods were used to label the action intervals in the data for deep-learning training: labeling the IMU data by the corresponding video segment boundaries, or by finding the onset-offset points of each motion action. The labeled data were segmented using different numbers of sensor channels and different window sizes to extract features for training, and three deep learning networks (CNN, LSTM, and CNN + LSTM) were used to recognize the motions; the best model in each network family was identified and their parameter counts compared. Experiments verified that the system recognizes subjects' motions in real time, and that the robot can be smoothly controlled and correctly performs the corresponding actions.
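    The window-based segmentation described in the abstract can be sketched as follows. This is a minimal illustration, not the thesis code: the channel count (5 IMUs x 9 axes = 45 channels), the window size and stride, and the majority-vote labeling rule are all assumptions for the example.

    ```python
    import numpy as np

    def sliding_windows(data, labels, window_size=64, stride=32):
        """Segment a multichannel time series into fixed-size windows.

        data:   (T, C) array of T samples over C sensor channels
                (e.g. 5 IMUs x 9 axes = 45 channels, an assumed layout)
        labels: (T,) per-sample action labels from onset-offset marking
        Returns (windows, window_labels); each window is assigned the
        majority label of the samples it covers.
        """
        X, y = [], []
        for start in range(0, len(data) - window_size + 1, stride):
            seg = data[start:start + window_size]
            seg_labels = labels[start:start + window_size]
            # majority vote over the samples inside this window
            values, counts = np.unique(seg_labels, return_counts=True)
            X.append(seg)
            y.append(values[np.argmax(counts)])
        return np.stack(X), np.array(y)

    # toy example: 200 samples, 45 channels, two action labels
    rng = np.random.default_rng(0)
    data = rng.normal(size=(200, 45))
    labels = np.array([0] * 100 + [1] * 100)
    X, y = sliding_windows(data, labels, window_size=64, stride=32)
    print(X.shape)  # (5, 64, 45)
    ```

    Each (window_size, channels) window would then be fed to the CNN, LSTM, or CNN + LSTM classifier; varying `window_size` and the channel subset is how the abstract's comparison over window sizes and sensor channel counts would be run.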
    Appears in Collections: [Graduate Institute of Electrical Engineering] Master's and Doctoral Theses

    Files in This Item:

    File | Description | Size | Format | Views
    index.html | | 0Kb | HTML | 88


    All items in NCUIR are protected by copyright, with all rights reserved.


    Copyright © National Central University | National Central University Library | Site established 2009-08-24