Graduate Thesis 108521094: Detailed Record




Name 管祥祐 (Hsiang-Yu Kuan)   Graduating Department Department of Electrical Engineering
Thesis Title Posture Sensing Analysis Using Deep Learning Networks and Battle Robot Control
(Battle robot control using deep learning network based posture detection)
Related Theses
★ A Comb Filter Approach for Phase-Coded Steady-State Visual Evoked Potential Brain-Computer Interfaces
★ Application of Electroluminescent Devices to Steady-State Visual Evoked Potential Brain-Computer Interface Detection
★ Development of a Real-Time Physiological Display Device for Smartphones
★ A Multi-Frequency Phase-Coded Flash Visual Evoked Potential Driven Brain-Computer Interface
★ Analysis of Steady-State Visual Evoked Potentials for Brain-Computer Interfaces Using Empirical Mode Decomposition
★ Extraction of Auditory Evoked Magnetoencephalographic Signals Using Empirical Mode Decomposition
★ Application of Light-Dark Flicker Visual Evoked Potentials to Remote Controls
★ Real-Time Control of a Brain-Wave Remote-Controlled Car via Steady-State Visual Evoked Potentials Using Ensemble Empirical Mode Decomposition
★ Fuzzy-Theory-Based Detection for Steady-State Visual Evoked Potential Brain-Computer Interfaces
★ Forward-Model-Designed Spatial Filters for Noise Cancellation in Visual Evoked Potential Brain-Computer Interfaces
★ An Intelligent Remote ECG Monitoring System
★ Hidden Markov Model Based Detection for Steady-State Visual Evoked Potential Brain-Computer Interfaces and Its Application to Brain-Controlled Remote Cars
★ Neural-Network-Based Prediction of Human Joint Angles from Limb Electromyographic Signals
★ Finger Vein Image Segmentation Using Level Set Methods and Image Inhomogeneity Correction
★ Wavelet Coding for Multi-Channel Physiological Signal Transmission
★ Target Detection in Phase-Coded Visual Brain-Computer Interfaces Combining Gaussian Mixture Models and Expectation-Maximization
Full Text Available in the system after 2026-08-04.
Abstract (Chinese) This study uses wearable sensors to acquire time-series data representing dynamic human activity recognition (HAR), analyzes and recognizes the motions, reproduces them on a robot, and further develops a system for real-time control of a battle robot. Each subject wore five laboratory-made posture sensors, each built around a nine-axis inertial measurement unit (IMU) that measures angular velocity, acceleration, and geomagnetic field strength, with data transmitted wirelessly over WiFi. The sensors were placed on the four limbs and the waist to capture the subject's whole-body motion state. Subjects performed eleven fighting motions plus a resting motion as the actions of this HAR dataset. Two methods were used to label the motion intervals in the data for deep learning training: labeling the IMU data by video segment intervals, or by locating the onset and offset points of each motion. From the labeled data, different features were extracted as training data according to the number of sensor channels and the window size, and the motions were recognized with three networks: CNN, LSTM, and CNN + LSTM. The best model of each network type was compared, along with their parameter counts. Experiments verified that the system can recognize subjects' motions in real time, and that the robot can be controlled smoothly and performs the motions correctly.
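The abstract describes slicing the labeled IMU streams into fixed-size windows before feature extraction and network training. Below is a minimal sketch of that windowing step with assumed values throughout: 45 channels (5 sensors x 9 axes), a 128-sample window, 50% overlap, and a hypothetical `sliding_windows` helper. The thesis's actual sampling rate and window sizes are not stated in this record.

```python
import numpy as np

def sliding_windows(signal, labels, window_size=128, stride=64):
    """Segment a multi-channel IMU recording into overlapping windows.

    signal: (n_samples, n_channels) array; 45 channels assumed here
            (5 sensors x 9 axes: gyroscope, accelerometer, magnetometer).
    labels: (n_samples,) integer motion labels (0 = rest, 1..11 = fighting moves).
    Returns (windows, window_labels); each window takes its majority label.
    """
    windows, window_labels = [], []
    for start in range(0, len(signal) - window_size + 1, stride):
        seg = signal[start:start + window_size]
        lab = labels[start:start + window_size]
        windows.append(seg)
        # Assign the label that occurs most often inside the window.
        window_labels.append(np.bincount(lab).argmax())
    return np.stack(windows), np.array(window_labels)

# Example with synthetic data: 10 s at a hypothetical 100 Hz, 45 channels.
X = np.random.randn(1000, 45)
y = np.random.randint(0, 12, size=1000)
wins, wlabs = sliding_windows(X, y)
print(wins.shape, wlabs.shape)  # (14, 128, 45) (14,)
```

Overlapping windows trade extra training examples for correlated samples; the window-size comparison reported in the thesis corresponds to varying `window_size` here.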
Abstract (English) This study aims to recognize dynamic human activity recognition (HAR) of different postures by wearing a set of whole-body motion sensors. The recognized HAR was applied to instantly control a battle robot. We used our homemade motion sensors, in which each motion sensor consists of a nine-axis inertial measurement unit (IMU) measuring nine-axis information: 3-axis angular velocity, 3-axis acceleration, and 3-axis geomagnetic field strength. Five motion sensors were used to acquire subjects' instant motion information, wirelessly transmitted to a remote PC for data processing through WiFi connections. The five motion sensors were attached to subjects' four limbs and the front of the waist to acquire HAR. Subjects were requested to complete twelve motions, including eleven fighting motions and a resting motion. Subjects were asked to move their bodies to follow the fighting actions shown in a video clip. The twelve motions were labeled by finding the breaks between two consecutive motion actions or by finding the onset-offset points of each motion action. The labeled data were analyzed using deep learning networks, and CNN, LSTM, and CNN + LSTM models were compared. Parameters in the neural networks, as well as different numbers of sensor channels and window sizes, were tuned to find the best model structure and parameters. The proposed system has been demonstrated to successfully recognize subjects' different motions in real time from the initial onset of each motion action.
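The record names CNN, LSTM, and CNN + LSTM classifiers but gives no layer details. The following is an illustrative Keras sketch of the CNN + LSTM variant operating on windows shaped like those above; every layer width and hyperparameter is an assumption, not the thesis's reported architecture:

```python
import tensorflow as tf
from tensorflow.keras import layers, models

def build_cnn_lstm(window_size=128, n_channels=45, n_classes=12):
    """Hypothetical CNN + LSTM classifier: 1-D convolutions extract local
    motion features, an LSTM models their temporal order, and a softmax
    head scores the 12 motion classes (11 fighting moves + rest)."""
    model = models.Sequential([
        layers.Input(shape=(window_size, n_channels)),
        layers.Conv1D(64, kernel_size=5, activation="relu"),
        layers.MaxPooling1D(pool_size=2),
        layers.Conv1D(64, kernel_size=5, activation="relu"),
        layers.MaxPooling1D(pool_size=2),
        layers.LSTM(64),
        layers.Dense(n_classes, activation="softmax"),
    ])
    model.compile(optimizer="adam",
                  loss="sparse_categorical_crossentropy",
                  metrics=["accuracy"])
    return model

model = build_cnn_lstm()
model.summary()
```

The pure-CNN and pure-LSTM baselines compared in Chapter 4 would drop the LSTM layer or the convolutional front-end, respectively.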
Keywords (Chinese) ★ Inertial measurement unit
★ Battle robot
★ CNN
Keywords (English) ★ Inertial motion unit
★ Battle Robot
★ CNN
Table of Contents Chinese Abstract i
Abstract ii
Table of Contents iii
List of Figures v
List of Tables viii
Chapter 1 Introduction 1
1-1 Research Motivation and Objectives 1
1-2 Literature Review 2
1-3 Thesis Organization 4
Chapter 2 Principles 5
2-1 Inertial Measurement Unit 5
2-1-1 Accelerometer 6
2-1-2 Gyroscope 7
2-1-3 Magnetometer 8
2-2 Quaternions and Euler Angles 9
2-2-1 Introduction to Quaternions 9
2-2-2 Euler Angles and Quaternions 12
2-3 Artificial Neural Networks 14
2-3-1 Neural Networks 14
2-3-2 Convolutional Neural Networks 15
2-3-3 Long Short-Term Memory Networks 17
Chapter 3 Research Design and Methods 18
3-1 System Architecture 18
3-1-1 Hardware Architecture of the Posture Measurement System 19
3-1-2 Software Architecture of the Posture Measurement System 23
3-2 Motion Labeling 24
3-2-1 Data Cleaning 24
3-2-2 Labeling Motion Intervals from Video Segments 25
3-2-3 Labeling Motion Intervals by Onset Detection 26
3-3 Experimental Methods 31
3-3-1 Data Collection and Network Training 32
3-3-2 Real-Time Recognition System Workflow 37
Chapter 4 Results and Discussion 38
4-1 Network Model Comparison 38
4-1-1 Accuracy and F1-score Comparison of CNN Networks 39
4-1-2 Accuracy and F1-score Comparison of LSTM Networks 41
4-1-3 Accuracy and F1-score Comparison of CNN+LSTM Networks 43
4-1-4 Best Model Comparison 45
4-1-5 Confusion Matrix: Misclassification of the "Rest" Class 46
4-2 Real-Time Recognition Results 48
4-2-1 Recognition Latency 48
4-2-2 Test Results 49
4-2-3 Motion Delay Analysis 50
Chapter 5 Conclusions and Future Work 53
Chapter 6 References 54
Advisor 李柏磊   Date of Approval 2021-08-23
