Thesis 110522102: Detailed Record




Name: 張智穎 (Chih-Ying Chang)    Department: Computer Science and Information Engineering
Thesis Title: 基於深度學習之幼兒居家危險行為監測系統
(A Deep Learning-Based Home Safety Behavior Monitoring System for Children)
Related Theses
★ A Q-Learning-Based Swarm Intelligence Algorithm and Its Applications
★ Development of a Rehabilitation System for Children with Developmental Delays
★ Comparing Teacher Assessment and Peer Assessment from the Perspective of Cognitive Styles: From English Writing to Game Making
★ A Diabetic Nephropathy Prediction Model Based on Laboratory Test Values
★ Design of a Fuzzy-Neural-Network-Based Remote Sensing Image Classifier
★ A Hybrid Clustering Algorithm
★ Development of Assistive Devices for People with Disabilities
★ A Study on Fingerprint Classifiers
★ A Study on Backlit Image Compensation and Color Quantization
★ Application of Neural Networks to Business Income Tax Audit Case Selection
★ A New Online Learning System and Its Application to Tax Audit Case Selection
★ An Eye-Tracking System and Its Applications to Human-Computer Interfaces
★ Data Visualization Combining Swarm Intelligence and Self-Organizing Maps
★ Development of a Pupil-Tracking System for Human-Computer Interfaces for People with Disabilities
★ An Artificial-Immune-System-Based Online Learning Neuro-Fuzzy System and Its Applications
★ Application of Genetic Algorithms to Speech Descrambling
  1. The author has agreed to make this electronic thesis openly available immediately.
  2. Electronic full texts that have reached their open-access date are licensed only for personal, non-commercial retrieval, reading, and printing for academic research purposes.
  3. Please observe the relevant provisions of the Copyright Act of the Republic of China; do not reproduce, distribute, adapt, repost, or broadcast the work without authorization.

Abstract (Chinese): The home is where accidents involving young children occur most frequently, and young children typically spend most of their time at home and in kindergarten. Ensuring home safety is therefore essential to their well-being, with falls being the most common accident. Existing methods for detecting danger to young children rely mainly on wearable sensors, which serve a single function and are inconvenient to use. In addition, relatively few deep learning studies have examined fall detection in the home environment, and many directions remain worth exploring.

This thesis therefore proposes a deep-learning-based system for monitoring young children's dangerous behaviors, aiming to recognize their actions and detect falls in real time. The system classifies a child's posture into five categories: standing, prone, supine, sitting, and falling. Falling is treated as the most important action: when a fall is detected, the system quickly issues an alarm or notification to parents or guardians.

Because no public dataset of young children's actions exists, 1006 real videos of children were collected from the Internet, covering various viewpoints in home environments. Taking consecutive images as input and applying deep learning, image processing algorithms, and skeleton detection, the system recognizes children's actions and further detects falls.

The system achieves an accuracy of 89.1% in action recognition; for fall detection, the precision is 76.1% and the recall is 81.6%. Its execution speed also enables real-time recognition. These results demonstrate that the system can effectively monitor young children's dangerous behaviors and has potential applications in fields such as child care and safety monitoring.
Abstract (English): The home environment is where accidents involving children occur most frequently, and children typically spend most of their time at home and in kindergarten. Ensuring a safe home is therefore critical to children's safety, and falls are the most common type of accident. Existing methods for detecting danger to children rely mainly on wearable sensors, which serve a single function and are inconvenient to use. In addition, relatively few deep learning studies have addressed fall detection in the home environment, leaving many directions worth exploring.

Therefore, this thesis proposes a deep-learning-based monitoring system for children's dangerous behaviors, which aims to recognize children's actions and detect falls in real time. The system classifies a child's posture into five categories: standing, prone, supine, sitting, and falling. Falling is regarded as the most important action: when the system detects a fall, it quickly sends an alarm or a notification to parents or guardians.
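The five-class taxonomy and the fall-triggered alert described above can be sketched as follows; the class names, the `handle_prediction` helper, and the `notify` callback are illustrative assumptions, not the thesis's actual implementation.

```python
# Hypothetical sketch of the posture classes and the alert rule from the
# abstract: only the "falling" class triggers a notification.
from enum import Enum

class Posture(Enum):
    STANDING = 0
    PRONE = 1    # lying face down
    SUPINE = 2   # lying face up
    SITTING = 3
    FALLING = 4

def handle_prediction(posture: Posture, notify) -> bool:
    """Send an alert via `notify` only for the safety-critical class."""
    if posture is Posture.FALLING:
        notify("Fall detected: alerting parents/guardians")
        return True
    return False
```

Keeping the alert decision separate from the classifier makes it easy to swap the notification channel (app push, SMS, etc.) without touching the recognition model.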

Since no publicly available children's action dataset exists, this work collected 1006 real videos of children from the Internet, covering various viewpoints in home environments. Taking consecutive images as input and combining deep learning, image processing algorithms, and skeleton detection, the system recognizes children's actions and further detects the occurrence of falls.
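A "consecutive images as input" pipeline is commonly implemented by buffering the video stream into fixed-length clips and classifying each clip. The sketch below illustrates that idea only; the 16-frame window, non-overlapping stride, and `classify_clip` interface are assumptions, not values from the thesis.

```python
# Illustrative clip-windowing for a video-stream classifier: collect
# consecutive frames into fixed-length windows and classify each window.
from collections import deque

def classify_stream(frames, classify_clip, window=16):
    """Yield one prediction per full window of consecutive frames."""
    buf = deque(maxlen=window)
    for frame in frames:
        buf.append(frame)
        if len(buf) == window:
            yield classify_clip(list(buf))
            buf.clear()  # non-overlapping windows for simplicity
```

In practice an overlapping stride gives faster reaction to a fall at the cost of more classifier invocations per second.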

The system achieves an accuracy of 89.1% in action recognition; for fall detection, the precision is 76.1% and the recall is 81.6%. Its execution speed also enables real-time recognition. These results demonstrate that the system can effectively monitor children's dangerous behaviors and has potential applications in fields such as child care and safety monitoring.
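For reference, the reported fall-detection metrics are defined from confusion-matrix counts as below; the example counts in the comment are made-up illustration values, not the thesis's actual numbers.

```python
# Standard definitions of the metrics quoted in the abstract.
def precision(tp: int, fp: int) -> float:
    """Fraction of raised fall alarms that were real falls."""
    return tp / (tp + fp)

def recall(tp: int, fn: int) -> float:
    """Fraction of real falls that triggered an alarm."""
    return tp / (tp + fn)

# e.g. 80 falls caught, 20 false alarms, 20 falls missed:
# precision(80, 20) -> 0.8, recall(80, 20) -> 0.8
```

For a safety monitor, recall is usually weighted more heavily than precision, since a missed fall is costlier than a spurious alert.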
Keywords (Chinese):
★ infant risk detection
★ home safety
★ deep learning
★ image processing
★ action recognition
Keywords (English):
★ infant risk detection
★ home safety
★ deep learning
★ image processing
★ action recognition
Table of Contents
Abstract (Chinese)
Abstract (English)
Table of Contents
1. Introduction
   1.1 Research Motivation
   1.2 Research Objectives
   1.3 Thesis Organization
2. Related Work and Literature Review
   2.1 Related Work
      2.1.1 Home Safety
      2.1.2 Sensor-Based Monitoring
      2.1.3 Vision-Based Monitoring
      2.1.4 Deep Learning Networks
   2.2 Literature Review
      2.2.1 Studies on Adult Action Recognition and Danger Monitoring
      2.2.2 Observations on Existing Research Datasets
      2.2.3 Studies Applying Skeleton Detection Algorithms to Young Children
3. Methodology
   3.1 Child Action Dataset
      3.1.1 Dataset Collection
      3.1.2 Data Preprocessing
      3.1.3 Data Augmentation
   3.2 Child Home Danger Monitoring System
      3.2.1 System Overview
      3.2.2 System Workflow
   3.3 Deep Network Model Recognition
   3.4 Skeleton Detection and Recognition Algorithm
      3.4.1 Skeleton Detection
      3.4.2 Skeleton Information Collection
      3.4.3 Skeleton Information Analysis
      3.4.4 Skeleton-Based Decision
   3.5 Post-Processing Mechanisms
      3.5.1 Final Decision Mechanism
      3.5.2 Danger Alert Mechanism
4. Experimental Design and Results
   4.1 Action Recognition Accuracy Experiment
      4.1.1 Experimental Design and Results
      4.1.2 Analysis of Results
   4.2 Skeleton Detection and Recognition Experiment
      4.2.1 Experimental Design and Results
      4.2.2 Analysis of Results
   4.3 System Accuracy Experiment
      4.3.1 Experimental Design and Results
      4.3.2 Analysis of Results
   4.4 System Execution Time Experiment
      4.4.1 Experimental Design and Results
      4.4.2 Analysis of Results
5. Conclusion
   5.1 Conclusions
   5.2 Future Work
References
Advisor: 蘇木春 (Mu-Chun Su)    Approval Date: 2023-08-11
