Master's/Doctoral Thesis 104522075: Detailed Record
Name: Shao-Wu Tseng (曾紹武)    Department: Computer Science and Information Engineering
Title: A DNN-based System for the Recognition of the Activities of Daily Living
(以深度類神經網路為基礎之居家生活動作辨識系統)
Related theses
★ A Q-learning-based swarm intelligence algorithm and its applications
★ Development of a rehabilitation system for children with developmental delays
★ Comparing teacher assessment and peer assessment from the perspective of cognitive styles: from English writing to game making
★ A prediction model for diabetic nephropathy based on laboratory test values
★ Design of a remote-sensing image classifier based on fuzzy neural networks
★ A hybrid clustering algorithm
★ Development of assistive devices for people with disabilities
★ A study of fingerprint classifiers
★ A study of backlit-image compensation and color quantization
★ Application of neural networks to the selection of business income tax audit cases
★ A new online learning system and its application to tax audit case selection
★ An eye-tracking system and its applications to human-computer interfaces
★ Data visualization combining swarm intelligence and self-organizing maps
★ Development of a pupil-tracking system as a human-computer interface for the disabled
★ An artificial-immune-system-based online-learning neuro-fuzzy system and its applications
★ Application of genetic algorithms to speech descrambling
  1. The author has agreed to make this electronic thesis openly available immediately.
  2. The open-access full text is licensed only for personal, non-commercial retrieval, reading, and printing for the purpose of academic research.
  3. Please comply with the Copyright Act of the Republic of China; do not reproduce, distribute, adapt, repost, or broadcast this work without authorization.

Abstract (Chinese): Thanks to advances in medical technology in recent years, Taiwan faces an aging population, and as younger family members move away, the care of elderly people living alone demands more attention than ever. Measuring the activity level of elderly people living alone effectively and in real time is therefore an important issue. This thesis develops a deep-neural-network-based system for recognizing activities of daily living at home. Two feature-extraction schemes are proposed, one numerical and one image-based, both operating on skeleton data obtained from a color camera; a neural network then classifies the actions, which are logged so that users can review them at any time. The system also includes a fall-detection scenario to guard against accidents, and for elderly people living alone it can give long-term care institutions a reference for their activity level.
This thesis defines ten home activities of daily living, including one basic falling action used to recognize falls, and collects a dataset of these actions to train and test the neural networks. The system generalizes across camera angles and across different subjects with over 90% accuracy, and achieves 92.93% accuracy in live testing, demonstrating that it is reliable enough for home activity recognition.
Abstract (English): In recent years, thanks to advances in medical technology, Taiwan has faced the severe problem of population aging. As young people move away for work or marriage, the care of elderly people who live alone is more important than ever, and measuring their activities of daily living effectively is a crucial issue. In this thesis we develop a DNN-based system for the recognition of the activities of daily living. The system estimates skeleton data from color images recorded by a webcam or surveillance system, and uses neural networks such as CNNs, BPNs, and DNNs to classify the features proposed in this thesis. Recognized actions are logged in order to give the user a daily report.
We design ten different activities of daily living, including one fall scenario, and test the collected data in angular-tolerance and person-independent experiments, obtaining recognition rates above 90% in both. Even in a real-life test the system achieves a precision of 92.93%. These experiments show that the system is robust enough to provide a reliable report to the user.
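The pipeline the abstract describes (skeleton extraction, normalization, numerical features, neural-network classification) can be illustrated with a minimal sketch of the feature step. Everything below is an assumption for illustration: the joint indices, the hip/torso normalization, and the choice of segment angles are not taken from the thesis, which only states that numerical features are derived from camera skeleton data.

```python
import math

# Illustrative 2-D joint layout; the thesis's actual skeleton format is not
# given in this record, so these indices are assumptions.
NECK, HIP, SHOULDER_R, ELBOW_R, WRIST_R = range(5)

def normalize(skeleton):
    """Translate the skeleton so the hip is the origin and scale by the
    neck-to-hip (torso) length, making features invariant to where the person
    stands and how large they appear (a common scheme; the thesis's exact
    normalization may differ)."""
    hx, hy = skeleton[HIP]
    nx, ny = skeleton[NECK]
    torso = math.hypot(nx - hx, ny - hy) or 1.0  # guard against zero length
    return [((x - hx) / torso, (y - hy) / torso) for x, y in skeleton]

def limb_angles(skeleton, segments):
    """Orientation of each limb segment, computed with atan2, as one kind of
    numerical feature a classifier could consume."""
    return [
        math.atan2(skeleton[b][1] - skeleton[a][1],
                   skeleton[b][0] - skeleton[a][0])
        for a, b in segments
    ]

# Toy frame: pixel coordinates for the five joints above.
frame = [(100, 50), (100, 150), (80, 60), (60, 90), (50, 120)]
feats = limb_angles(normalize(frame),
                    [(NECK, HIP), (SHOULDER_R, ELBOW_R), (ELBOW_R, WRIST_R)])
```

Per-frame vectors like `feats` would then be accumulated over a sliding window and fed to a classifier such as a BPN or DNN.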
Keywords (Chinese): ★ ADL
★ elder care
★ falls
★ video surveillance
★ neural networks
Keywords (English): ★ ADL
★ eldercare
★ fall detection system
★ video surveillance
★ neural network
Table of Contents
Chapter 1: Introduction 1
1-1 Motivation 1
1-2 Objectives 2
1-3 Thesis organization 3
Chapter 2: Related Work 4
2-1 Activities-of-daily-living assessment scales 4
2-2 Action recognition 5
2-3 Pose estimation 8
2-3-1 Pose estimation from depth images 8
2-3-2 Pose estimation from 2D images 10
2-4 Neural networks 12
2-4-1 Perceptrons 12
2-4-2 Back-propagation neural networks 14
2-4-3 Convolutional neural networks 17
2-5 Deep-learning frameworks 23
2-5-1 TensorFlow 24
2-5-2 Caffe 25
Chapter 3: Methods 26
3-1 Software architecture and flow 26
3-2 Normalization 28
3-3 Feature extraction 31
3-3-1 Numerical feature extraction 31
3-3-2 CNN-based feature extraction 37
3-4 Post-processing 38
Chapter 4: Experimental Design and Results 40
4-1 Experimental design 40
4-2 Datasets 41
4-2-1 Dataset recording setup 41
4-2-2 Home activity scenarios 43
4-2-3 Dataset skeletons 47
4-3 Network training 49
4-4 Feature-extraction tests 52
4-5 Angular generalization tests 55
4-6 Cross-subject generalization tests 58
4-7 Sliding-window experiments 61
4-8 Live testing 62
4-9 Comparison of experimental results 63
Chapter 5: Conclusions and Future Work 66
5-1 Conclusions 66
5-2 Future work 67
References 68
References
[1] 106年第10週內政統計通報 (Ministry of the Interior statistical bulletin, week 10, 2017). [Online]. Available: http://www.moi.gov.tw/stat/news_content.aspx?sn=11735. [Accessed: 4-Jul-2017].
[2] Kinect Interactive Games. [Online]. Available: http://x-tech.am/kinect-interactive-games-game-development-company/. [Accessed: 4-Jul-2017].
[3] B. D. Lucas and T. Kanade, "An iterative image registration technique with an application to stereo vision," 7th International Joint Conference on Artificial Intelligence, pp. 674-679, 1981.
[4] I. Laptev and T. Lindeberg, "Local descriptors for spatio-temporal recognition," Spatial Coherence for Visual Motion Analysis, Vol. 3667, pp. 91-103, 2004.
[5] A. F. Bobick and J. W. Davis, "The recognition of human movement using temporal templates," IEEE Transactions on Pattern Analysis and Machine Intelligence, Vol. 23, No. 3, pp. 257-267, 2001.
[6] S. Danafar and N. Gheissari, "Action recognition for surveillance applications using optic flow and SVM," Proceedings of the Asian Conference on Computer Vision, Vol. 4844, pp. 457-466, 2007.
[7] I. Laptev and T. Lindeberg, "Space-time interest points," Proceedings of the International Conference on Computer Vision, Vol. 1, pp. 432-439, 2003.
[8] C. Harris and M. J. Stephens, "A combined corner and edge detector," Alvey Vision Conference, pp. 147-152, 1988.
[9] D. G. Lowe, "Distinctive image features from scale-invariant keypoints," International Journal of Computer Vision, Vol. 60, No. 2, pp. 91-110, 2004.
[10] H. Bay, A. Ess, T. Tuytelaars, and L. J. V. Gool, "SURF: Speeded up robust features," Computer Vision and Image Understanding, Vol. 110, No. 3, pp. 346-359, 2008.
[11] Wikipedia: Kinect. [Online]. Available: https://en.wikipedia.org/wiki/Kinect. [Accessed: 22-May-2017].
[12] J. Shotton, A. Fitzgibbon, M. Cook, T. Sharp, M. Finocchio, R. Moore, A. Kipman, and A. Blake, "Real-time human pose recognition in parts from single depth images," IEEE Computer Vision and Pattern Recognition, pp. 1297-1304, 2011.
[13] T. K. Ho, "Random decision forests," International Conference on Document Analysis and Recognition, Vol. 1, pp. 278-282, 1995.
[14] D. Comaniciu and P. Meer, "Mean shift: A robust approach toward feature space analysis," IEEE Transactions on Pattern Analysis and Machine Intelligence, Vol. 24, pp. 603-619, 2002.
[15] Houston Ballet II dancers Liana Carpio & Chunwai Chan perform Ben Stevenson's Sylvia. Photo: Amitava Sarkar. [Online]. Available: https://houstonballet.wordpress.com/2011/08/23/18th-annual-theater-district-open-house/. [Accessed: 22-May-2017].
[16] Z. Cao, T. Simon, S. E. Wei, and Y. Sheikh, "Realtime multi-person 2D pose estimation using part affinity fields," Conference on Computer Vision and Pattern Recognition, 2017.
[17] Wikipedia (Chinese): Perceptron (感知器). [Online]. Available: https://zh.wikipedia.org/wiki/感知器. [Accessed: 22-May-2017].
[18] D. E. Rumelhart, G. E. Hinton, and R. J. Williams, "Learning representations by back-propagating errors," Nature, Vol. 323, pp. 533-536, 1986.
[19] 蘇木春 and 張孝德, Machine Learning: Neural Networks, Fuzzy Systems, and Genetic Algorithms (in Chinese), Chuan Hwa Book Co., 2012.
[20] V. Nair and G. Hinton, "Rectified linear units improve restricted Boltzmann machines," 27th International Conference on Machine Learning, pp. 807-814, 2010.
[21] S. Hochreiter, Y. Bengio, P. Frasconi, and J. Schmidhuber, "Gradient flow in recurrent nets: the difficulty of learning long-term dependencies," in A Field Guide to Dynamical Recurrent Neural Networks, IEEE Press, 2001.
[22] Using large-scale brain simulations for machine learning and A.I. [Online]. Available: https://googleblog.blogspot.tw/2012/06/using-large-scale-brain-simulations-for.html. [Accessed: 22-May-2017].
[23] Get off the deep learning bandwagon and get some perspective. [Online]. Available: http://www.pyimagesearch.com/2014/06/09/get-deep-learning-bandwagon-get-perspective/. [Accessed: 22-May-2017].
[24] M. D. Zeiler and R. Fergus, "Visualizing and understanding convolutional networks," 13th European Conference on Computer Vision, Vol. 8689, pp. 818-833, 2014.
[25] Deep learning for complete beginners: convolutional neural networks with keras. [Online]. Available: https://cambridgespark.com/content/tutorials/convolutional-neural-networks-with-keras/index.html. [Accessed: 22-May-2017].
[26] Wikipedia: Convolutional neural network. [Online]. Available: https://en.wikipedia.org/wiki/Convolutional_neural_network. [Accessed: 22-May-2017].
[27] TensorFlow. [Online]. Available: https://www.tensorflow.org. [Accessed: 4-Jul-2017].
[28] An in-depth look at Google's first Tensor Processing Unit (TPU). [Online]. Available: https://cloud.google.com/blog/big-data/2017/05/an-in-depth-look-at-googles-first-tensor-processing-unit-tpu. [Accessed: 4-Jul-2017].
[29] Caffe. [Online]. Available: http://caffe.berkeleyvision.org. [Accessed: 4-Jul-2017].
[30] Wikipedia: atan2. [Online]. Available: https://en.wikipedia.org/wiki/Atan2. [Accessed: 4-Jul-2017].
[31] Panasonic Shop. [Online]. Available: http://shop.panasonic.com/support-only/DMC-LX3K.html-q=lx3&start=1. [Accessed: 4-Jul-2017].
Advisor: Mu-Chun Su (蘇木春)    Date of Approval: 2017-08-14

For questions about this thesis, please contact the Extension Services Division of the National Central University Library, TEL: (03)422-7151 ext. 57407, or by e-mail.