DC Field | Value | Language |
dc.contributor | 電機工程學系 | zh_TW |
dc.creator | 管祥祐 | zh_TW |
dc.creator | Hsiang-Yu Kuan | en_US |
dc.date.accessioned | 2021-8-23T07:39:07Z | |
dc.date.available | 2021-8-23T07:39:07Z | |
dc.date.issued | 2021 | |
dc.identifier.uri | http://ir.lib.ncu.edu.tw:444/thesis/view_etd.asp?URN=108521094 | |
dc.contributor.department | 電機工程學系 | zh_TW |
dc.description | 國立中央大學 | zh_TW |
dc.description | National Central University | en_US |
dc.description.abstract | 本研究使用可穿戴式感測器獲得表示人類的活動識別(HAR)動態的時間序列數據,分析辨識動作與透過機器人表現,進一步開發出一套即時操控格鬥機器人的系統。受試者配戴五顆實驗室自製的慣性感測器,每顆姿態感測器使用一顆九軸慣性感測元件(IMU)組成用來測量角速度、加速度以及地磁強度資訊,經由WiFi無線傳輸進行資料傳輸。量測位置包含四肢及腰部,來獲取受測者全身的運動狀態。本研究請受試者完成11個格鬥動作與靜止動作當本次HAR數據集的動作。我們利用兩種方法來標記數據中的動作區間提供深度學習網路訓練,IMU的訓練資料標記則採用影片的區間做標記、或者是找出動作的起始-終點進行標記,標記過的數據依照不同的感測器通道數與不同大小的window size,擷取不同的特徵當訓練資料,透過CNN、LSTM和CNN + LSTM三種網路辨識動作,比較各網路中的最佳模型和比較其參數量。經實驗驗證,本系統確實能即時辨識受試者動作,而機器人在表現動作上也能順暢的被操控與正確表現動作。 | zh_TW |
dc.description.abstract | This study aims to recognize dynamic human activities (human activity recognition, HAR) across different postures by wearing a set of whole-body motion sensors. The recognized activities were applied to instantly control a battle robot. We used our homemade motion sensors, in which each motion sensor consists of a nine-axis inertial measurement unit (IMU) that measures 3-axis angular velocity, 3-axis acceleration, and 3-axis geomagnetic field strength. Five motion sensors were used to acquire subjects' instant motion information, which was wirelessly transmitted to a remote PC for data processing through WiFi connections. The five motion sensors were attached to subjects' four limbs and the front of the waist to capture whole-body motion. Subjects were requested to complete twelve motions, including eleven fighting motions and a resting motion, by following the fighting actions shown in a video clip. The twelve motions were labeled either by finding the breaks between two consecutive motion actions or by finding the onset-offset points of each motion action. The labeled data were analyzed using deep learning networks, and CNN, LSTM, and CNN + LSTM models were compared. Parameters of the neural networks, as well as different sensor channel numbers and window sizes, were tuned to find the best model structure and parameters. The proposed system has been demonstrated to successfully recognize subjects' different motions at the initial onset of each motion action. | en_US |
dc.subject | 慣性感測單元 | zh_TW |
dc.subject | 格鬥機器人 | zh_TW |
dc.subject | CNN | zh_TW |
dc.subject | Inertial motion unit | en_US |
dc.subject | Battle Robot | en_US |
dc.subject | CNN | en_US |
dc.title | 深度學習網路之姿態感測分析與格鬥機器人控制 | zh_TW |
dc.language.iso | zh-TW | zh-TW |
dc.title | Battle robot control using deep learning network based posture detection | en_US |
dc.type | 博碩士論文 | zh_TW |
dc.type | thesis | en_US |
dc.publisher | National Central University | en_US |
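The abstract describes segmenting multi-channel IMU time series into fixed-size windows before feeding them to CNN/LSTM classifiers. A minimal sketch of that windowing step is shown below; the window size, step, and channel count (5 sensors × 9 axes = 45 channels) are illustrative assumptions, not values stated in the record.

```python
import numpy as np

def sliding_windows(signal, window_size, step):
    """Segment a multi-channel time series into fixed-size windows.

    signal: (n_samples, n_channels) array of IMU readings
    returns: (n_windows, window_size, n_channels) array of training examples
    """
    n_samples = signal.shape[0]
    starts = range(0, n_samples - window_size + 1, step)
    return np.stack([signal[s:s + window_size] for s in starts])

# Hypothetical example: 1000 samples of 45 IMU channels,
# cut into 128-sample windows with 50% overlap.
data = np.zeros((1000, 45))
windows = sliding_windows(data, window_size=128, step=64)
# windows.shape == (14, 128, 45)
```

Each resulting window would then be labeled (by video-segment boundaries or by motion onset-offset points, as the abstract describes) and passed to the CNN, LSTM, or CNN + LSTM network for classification.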