Thesis 965402011: Detailed Record




Author: Hsin-Min Lee (李新民)    Department: Computer Science and Information Engineering
Thesis Title: 應用相鄰最近特徵空間轉換法於跌倒偵測
(Fall Detection Using Nearest Neighbor Feature Line Embedding)
Related Theses
★ Real-Time Online Identity Recognition Using Viseme and Speech Biometric Features
★ An Image-Based Alignment System for SMD Packaging Tape
★ Detection of Content Forgery and Recovery of Deleted Data on Handheld Mobile Devices
★ License Plate Verification Based on the SIFT Algorithm
★ Local Pattern Features Based on Dynamic Linear Decision Functions for Face Recognition
★ A GPU-Based SAR Database Simulator: A Parallel Architecture for SAR Echo Signal and Image Databases
★ Personal Identity Verification Using Palmprints
★ Video Indexing Using Color Statistics and Camera Motion
★ Form Document Classification Using Field Clustering Features and Four-Direction Adjacency Trees
★ Stroke Features for Offline Chinese Character Recognition
★ Image Motion Vector Estimation Using Adjustable Block Matching Combined with Multi-Image Information
★ Color Image Analysis and Its Applications to Color-Quantized Image Retrieval and Face Detection
★ Extraction and Recognition of Logos on Chinese and English Business Cards
★ Chinese Signature Verification Using Virtual-Stroke Information Features
★ Face Detection, Facial Pose Classification, and Face Recognition Based on Triangle Geometry and Color Features
★ A Complementary Skin-Color-Based Face Detection Strategy
Electronic Full Text
  1. The electronic full text of this thesis has been approved for immediate open access.
  2. The open-access electronic full text is licensed only for personal, non-profit retrieval, reading, and printing by users for the purpose of academic research.
  3. Please comply with the relevant provisions of the Copyright Act of the Republic of China (Taiwan); do not reproduce, distribute, adapt, repost, or broadcast the work without authorization.

Abstract (Chinese) Because bodily reactions slow with age, falls have long been a leading cause of accidental death among the elderly. If automatic fall detection technology is integrated into a health care system, it can let people know when a fall has occurred so that appropriate assistance can be provided in time; dusky environments in particular are prone to becoming blind spots in care. In this study, a fall detection method intended mainly for dusky environments is proposed. In a dusky environment, abrupt changes in brightness prevent conventional CCD camera images from capturing the human silhouette cleanly, so a thermal imager is adopted to detect human bodies. The proposed method follows a coarse-to-fine strategy. First, in the coarse stage, downward optical flow features are extracted from the thermal images to identify fall-like actions. Then, in the fine stage, the horizontal projection of the motion history image (MHI) is extracted from each fall-like action, and nearest neighbor feature line embedding (NNFLE) is applied to verify the event. Experimental results show that the proposed method can distinguish fall incidents very accurately even in dusky environments and when multiple people overlap.
Abstract (English) Accidental falls are the most prominent cause of accidental death among elderly people because of their slow body reactions. Automatic fall detection technology integrated into a health care system can assist humans in monitoring the occurrence of falls, especially in dusky environments. In this study, a novel fall detection system focusing mainly on dusky environments is proposed. In dusky environments, the silhouette images of human bodies extracted from conventional CCD cameras are usually imperfect due to abrupt changes of illumination. Thus, our work adopts a thermal imager to detect human bodies. The proposed approach follows a coarse-to-fine strategy. First, downward optical flow features are extracted from the thermal images to identify fall-like actions in the coarse stage. The horizontal projections of motion history images (MHI) extracted from fall-like actions are then used to verify the incident with the proposed nearest neighbor feature line embedding (NNFLE) in the fine stage. Experimental results demonstrate that the proposed method can distinguish fall incidents with high accuracy even in dusky environments and under overlapping situations.
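The abstracts describe a concrete two-stage pipeline: a coarse stage that flags fall-like actions from downward optical flow inside the segmented body, and a fine stage that verifies them by feeding the horizontal projection of a motion history image (MHI) to the NNFLE classifier. The Python sketch below only illustrates those ingredients under stated assumptions; it is not the dissertation's implementation. The helper names, OpenCV's Farneback optical flow (standing in for the multi-frame flow actually used), the manual MHI accumulation, and every threshold, duration, and bin count are the editor's placeholders.

```python
# A minimal sketch (not the dissertation's implementation) of the coarse-to-fine
# features described in the abstract. All function names, thresholds, and
# parameters below are illustrative placeholders, not values from the thesis.
import cv2
import numpy as np


def coarse_stage_is_fall_like(prev_gray, curr_gray, body_mask, downward_ratio=0.6):
    """Coarse stage: flag a fall-like action when most of the significant optical
    flow inside the segmented body region points downward (positive v component
    in image coordinates). Inputs are 8-bit grayscale thermal frames."""
    flow = cv2.calcOpticalFlowFarneback(prev_gray, curr_gray, None,
                                        0.5, 3, 15, 3, 5, 1.2, 0)
    v = flow[..., 1][body_mask > 0]      # vertical flow components inside the body
    moving = np.abs(v) > 1.0             # ignore near-static pixels (placeholder)
    if not moving.any():
        return False
    return float((v[moving] > 0).mean()) > downward_ratio


def update_mhi(mhi, motion_mask, timestamp, duration=0.5):
    """Accumulate a motion history image: moving pixels are stamped with the
    current time, and entries older than `duration` seconds decay to zero."""
    mhi[motion_mask > 0] = timestamp
    mhi[mhi < timestamp - duration] = 0.0
    return mhi


def fine_stage_feature(mhi, n_bins=32):
    """Fine stage: the horizontal projection of the MHI (one value per image
    row), resampled to a fixed-length, normalized vector for the verifier."""
    proj = mhi.sum(axis=1).astype(np.float32).reshape(-1, 1)
    proj = cv2.resize(proj, (1, n_bins), interpolation=cv2.INTER_AREA).ravel()
    norm = np.linalg.norm(proj)
    return proj / norm if norm > 0 else proj


def feature_line_distance(x, xi, xj):
    """Point-to-feature-line distance: project x onto the line through the
    prototypes xi and xj and measure the residual. Nearest-feature-line
    classifiers, which NNFLE builds on, use this geometric primitive."""
    d = xj - xi
    t = np.dot(x - xi, d) / np.dot(d, d)
    return np.linalg.norm(x - (xi + t * d))
```

In the full system, such a fine-stage vector would first be mapped by the learned NNFLE projection and then compared against feature lines spanned by training samples of each class; the sketch stops at the raw features and the distance primitive.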
Keywords (Chinese) ★ 跌倒偵測 (fall detection)
Keywords (English) ★ Fall detection
★ Optical flow
★ Motion history image
★ Nearest feature line
★ Nearest neighbor feature line
Table of Contents
Abstract (in Chinese)
Abstract (in English)
Acknowledgements
Chapter 1: Introduction
1.1 Motivation
1.2 Organization of the Dissertation
Chapter 2: Review of Related Works
2.1 A Review of Eigenspace Approach
2.1.1 Linear Discriminant Analysis (LDA)
2.1.2 Local Structure Preserving Algorithm
2.1.3 Optimization of the Fisher Criterion
2.1.4 Discriminative Common Vectors (DCV)
2.2 Nearest Feature Line Embedding (NFLE)
Chapter 3: The Proposed Fall Detection Mechanism
3.1 Human body extraction
3.2 Optical flow in the coarse stage
3.3 Motion history image in the fine stage
3.4 Nearest Neighbor Feature Line Embedding (NNFLE)
Chapter 4: Experimental Results
4.1 Performance of various fall detection algorithms
4.2 The identification capability of the coarse-to-fine verifier
4.3 Performance evaluation of fall detection under overlapping situations
Chapter 5: Conclusions and Future Works
References

List of Figures
Fig. 1: The thermal imager
Fig. 2: Image extraction results captured by (a) a CCD camera, (b) a thermal imager
Fig. 3: Projection of NFL
Fig. 4: Training algorithm for the NFLE transformation
Fig. 5: Flow diagram for training and testing the fall detector
Fig. 6: Human body extraction. (a) Temperature gray-level images, (b) binarization results, and (c) morphological closing operation results
Fig. 7: The histogram of vertical components of optical flow for (a) walking, (b) falling down
Fig. 8: The region for optical flow estimation: (a) non-overlapping, (b) overlapping
Fig. 9: Fall incident in an overlapping situation. The first row shows the silhouettes, the second row the corresponding optical flow results, and the third row the histograms of vertical components of optical flow. (a) Results generated by the original method, (b) results generated by the dividing method
Fig. 10: MHI motion templates: (a) walk, (b) fall
Fig. 11: Fine-stage feature vector extraction: (a) MHI of walk, (b) horizontal projection of the walk MHI, (c) the fine-stage feature vector obtained from the walk MHI, (d) MHI of fall, (e) horizontal projection of the fall MHI, and (f) the fine-stage feature vector obtained from the fall MHI
Fig. 12: (a) An extrapolation error, (b) an interpolation error
Fig. 13: Training algorithm for the NNFLE transformation

List of Tables
Table 1: The data sets used in the experiments
Table 2: Fall detection performance on the data set (%)
Table 3: The identification capability of the coarse stage and fine stage of the proposed method
Table 4: Performance evaluation of fall detection under overlapping situations (%)
Advisors: Kuo-Chin Fan, Ying-Nong Chen (范國清、陳映濃)    Date of Approval: 2016-08-30
