Thesis 102522093 Detailed Record




Author: 邱馨儀 (Xin-yi Qiu)    Department: Computer Science and Information Engineering
Thesis Title: 使用魚眼相機做居家環境中的危險行為偵測
(Dangerous Behavior Detection in Home Environment Using a Fisheye Camera)
  1. Access to this electronic thesis: the author has agreed to immediate open access.
  2. The open-access full text is licensed to users only for personal, non-profit retrieval, reading, and printing for academic research purposes.
  3. Please observe the relevant provisions of the Copyright Act of the Republic of China; do not reproduce, distribute, adapt, repost, or broadcast the work without authorization.

Abstract (Chinese) Child accidents at home arise not only from environmental and equipment factors but also from children's own dangerous behaviors. In this study, we therefore propose a home-environment monitoring system that tracks a child's location and behavior in real time and, against regions defined in the home beforehand, decides whether that location combined with that behavior constitutes an abnormal or dangerous situation. Abnormal or dangerous events can thus be detected the moment they occur, alerting parents or the monitoring staff and reducing possible accidents. The behaviors detected in this study are walking, running, falling/squatting, sitting, and standing in an unsafe location.
Our monitoring system uses a 360° fisheye IP camera mounted under the ceiling to cover the surveillance area; the camera captures a full 360-degree panoramic image at once. Moving objects are detected with codebook background subtraction, in which a multi-layer background model structure adapts to periodically recurring backgrounds in the environment.
For fall detection, we judge falling postures using principal component analysis (PCA) and the height of a trapezoidal region bounding the body. The trapezoid feature is faster to compute than the PCA feature, so we prefer it; however, a trapezoid cannot be fitted near the image center, so the image is divided by distortion level into an outer and a central region, which use the trapezoid feature and the PCA feature respectively. In the outer region, the height of the trapezoid fitted to the body determines whether a fall occurred. In the central region, PCA first finds the body's first and second principal components, and the ratio of the eigenvalues along these two components decides whether the person has fallen. To reduce false alarms, besides the geometric shape features, we require a period of stillness after the fall before the event is confirmed as a real fall. Squat detection in the outer region uses the same trapezoid-height change as fall detection; however, self-occlusion at the image center is severe, so squatting cannot be detected from body shape there. For running detection, the moving speed observed in the image depends on position, so we first locate the person's foot point, use the distance between the foot point and the image center to compensate the image-plane speed, and then estimate the actual body speed; if the speed exceeds a threshold, running is reported.
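The position-dependent speed compensation just described can be sketched as below; the linear scale factor and the constant `k` are assumptions for illustration, not the actual image-to-floor calibration derived in the thesis.

```python
import math

def compensated_speed(p1, p2, center, dt, k=0.0):
    """Estimate a person's moving speed from two successive foot points.

    p1, p2: foot-point image coordinates in consecutive frames.
    center: fisheye image center.
    k: hypothetical calibration constant; the scale grows with the
       foot point's distance from the image center, since motion far
       from the center of a fisheye image appears compressed.
    """
    pixel_dist = math.dist(p1, p2)
    mid = ((p1[0] + p2[0]) / 2.0, (p1[1] + p2[1]) / 2.0)
    r = math.dist(mid, center)
    return pixel_dist * (1.0 + k * r) / dt

def is_running(speed, threshold):
    """Report running when the compensated speed exceeds a threshold."""
    return speed > threshold
```

With `k = 0` the function reduces to the raw pixel speed; a calibrated `k` would be fitted to the actual camera geometry.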
To suit home environments, the home is divided into three region types: (i) active regions (e.g., the floor), (ii) inactive regions (e.g., chairs and sofas), and (iii) forbidden regions (e.g., the kitchen and balcony). A behavior that is safe in an active region may be unsafe in an inactive region, such as standing on a chair; therefore, whether a behavior is dangerous is defined per region, and each region's abnormal behaviors are detected. The abnormal behaviors defined for active regions are running and falling. In inactive regions, a child standing on a chair or sofa is regarded as dangerous; standing detection there uses the foot-point position and the body trapezoid height. In forbidden regions, any entry by a child is regarded as unsafe, so the system reports an abnormal event whenever a detected foot point falls inside such a region. In inactive regions, the trapezoid height of a sitting child matches that of a fallen one, so sitting is easily misjudged as falling; we therefore explicitly detect sitting in inactive regions to reduce such false alarms.
Analysis of the experimental results shows that fall-detection accuracy is 86.7% using only the geometric shape features and 90.6% after adding the stillness check; running-detection accuracy for children is 95%; the detection rates of standing and sitting in inactive regions are both 100%; and the detection rate for entering forbidden regions is 100%.
Abstract (English) In homes, child accidents occur because of hazards in the home construction and furniture as well as children's own risky behaviors. In this paper, we propose a home-environment monitoring system that surveys children in homes and other indoor environments to prevent such accidents. First, we define three kinds of regions on the house floor: active, inactive, and forbidden regions. The proposed system then monitors the children's location and behavior in real time. Finally, it raises a warning when a child shows abnormal or dangerous behavior in a region; the warnings differ across regions. The behaviors considered in this study are walking, running, falling/squatting, sitting, and standing in an unsafe location.
In the proposed surveillance system, a 360-degree fisheye camera installed on the ceiling captures omnidirectional images. We extract body silhouettes with the codebook background subtraction method; its multi-layer background model handles moving backgrounds with periodic variation as well as illumination changes.
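A heavily simplified, single-channel sketch of the codebook idea: each pixel keeps several brightness ranges (codewords), so a background that recurs periodically matches one of its codewords instead of a single mean. The brightness tolerance `bound` and the grayscale model are assumptions; the full codebook model also encodes color distortion and prunes stale codewords.

```python
import numpy as np

def build_codebook(frames, bound=10):
    """Build a per-pixel codebook from grayscale training frames.

    Each codeword is a [low, high] brightness range; a pixel value that
    matches no existing codeword spawns a new one, so periodically
    recurring backgrounds get their own codewords.
    """
    h, w = frames[0].shape
    books = [[[] for _ in range(w)] for _ in range(h)]
    for f in frames:
        for y in range(h):
            for x in range(w):
                v = int(f[y, x])
                for cw in books[y][x]:
                    if cw[0] - bound <= v <= cw[1] + bound:
                        cw[0] = min(cw[0], v)   # widen the matched codeword
                        cw[1] = max(cw[1], v)
                        break
                else:
                    books[y][x].append([v, v])  # no match: new codeword
    return books

def foreground_mask(frame, books, bound=10):
    """Mark pixels that match no background codeword as foreground."""
    h, w = frame.shape
    mask = np.zeros((h, w), dtype=bool)
    for y in range(h):
        for x in range(w):
            v = int(frame[y, x])
            if not any(cw[0] - bound <= v <= cw[1] + bound
                       for cw in books[y][x]):
                mask[y, x] = True
    return mask
```

The per-pixel Python loops are for clarity only; a practical implementation would vectorize or run in native code.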
In fall detection, we use object directions derived from principal component analysis (PCA) and the height of the trapezoidal bounding box around the body as features. We prefer the trapezoidal bounding box because it is faster to compute; however, near the center of the omnidirectional image the trapezoidal box is unreliable, so we divide the image into two concentric regions: an inner and an outer region. In the outer region, the height of the trapezoidal bounding box decides falling. In the inner region, we use PCA to find the first and second principal components, and their eigenvalue ratio determines falling. To increase accuracy, we add the condition that a fall must be followed by a period of stillness. For squat detection, the outer region uses the same method as fall detection; in the inner region, squatting cannot be detected from body shape because self-occlusion is severe. For running detection, we compute the foot position and use consecutive images to estimate the body's moving speed. Because the apparent speed depends on the position in the image, a pre-calibrated correction is applied to the measured speed.
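The inner-region eigenvalue-ratio test can be sketched as follows; the threshold of 3.0 is an assumed value for illustration, not the one tuned in the thesis.

```python
import numpy as np

def pca_axis_ratio(points):
    """Ratio of the first to the second principal-component eigenvalue
    of a 2-D silhouette point set (rows of [x, y])."""
    pts = np.asarray(points, dtype=float)
    cov = np.cov(pts, rowvar=False)               # 2x2 covariance matrix
    evals = np.sort(np.linalg.eigvalsh(cov))[::-1]  # descending eigenvalues
    return evals[0] / evals[1]

def fell_in_center(points, ratio_thresh=3.0):
    """Seen from directly above, an upright body is roughly isotropic
    while a lying body is elongated, so a large eigenvalue ratio
    suggests a fall near the image center."""
    return pca_axis_ratio(points) > ratio_thresh
```

An elongated silhouette yields a large ratio and a compact one a ratio near 1, which is what separates a lying body from an upright one under the overhead view.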
To make the system suitable for home environments, the whole floor is divided into three areas: (i) active areas (e.g., the floor), (ii) inactive areas (e.g., chairs and sofas), and (iii) forbidden areas (e.g., the kitchen and balcony). Different behaviors are permitted in different areas. The abnormal behaviors defined for the active area are running and falling. In the inactive area, the abnormal behavior is standing on a chair or sofa. In the forbidden area, no behavior is permitted; once a foot position is detected inside it, a warning is raised.
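The per-area rules can be sketched as a simple lookup; the axis-aligned `Region` rectangles and the string labels are hypothetical simplifications of the areas marked out in the real system.

```python
from dataclasses import dataclass

ACTIVE, INACTIVE, FORBIDDEN = "active", "inactive", "forbidden"

@dataclass
class Region:
    kind: str
    x0: float
    y0: float
    x1: float
    y1: float

    def contains(self, p):
        return self.x0 <= p[0] <= self.x1 and self.y0 <= p[1] <= self.y1

def check_alarm(foot, behavior, regions):
    """Return a warning string (or None) for a detected foot point and
    recognized behavior: any entry into a forbidden region, standing in
    an inactive region, and running or falling in an active region
    raise warnings."""
    for r in regions:
        if r.contains(foot):
            if r.kind == FORBIDDEN:
                return "entered forbidden region"
            if r.kind == INACTIVE and behavior == "standing":
                return "standing on furniture"
            if r.kind == ACTIVE and behavior in ("running", "falling"):
                return behavior + " in active region"
            return None
    return None
```

The same behavior thus triggers different outcomes depending on the region the foot point lies in, mirroring the per-region definitions above.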
Many experiments were conducted. According to the results, the accuracy of fall detection is 86.7% using only geometric characteristics of the body's appearance and 90.6% after adding the stillness check. The running-detection rate for children is 95%, the detection rate of standing and sitting in the inactive area is 100%, and the detection rate of entering the forbidden area is 100%. The results reveal that the proposed system is effective and practical for the related applications.
Keywords (Chinese) ★ 跌倒偵測 (fall detection)
★ 監控系統 (surveillance system)
★ 魚眼相機 (fisheye camera)
★ 行為辨識 (behavior recognition)
Keywords (English) ★ fall detection
★ surveillance system
★ fisheye camera
★ behavior recognition
Table of Contents
Abstract (Chinese) i
Abstract (English) iii
Acknowledgments v
Table of Contents vi
List of Figures viii
List of Tables xi
Chapter 1 Introduction 1
1.1 Motivation 1
1.2 System Overview 2
1.3 Thesis Organization 3
Chapter 2 Related Work 5
2.1 Moving Object Detection 5
2.2 Fall Detection 6
2.3 Behavior Recognition 8
2.3.1 Behavior Features 8
2.3.2 Behavior Classification 14
Chapter 3 Object Detection 17
3.1 Codebook Background Model 17
3.1.1 Data Structure of the Codebook Background Model 17
3.1.2 Color and Brightness Computation 18
3.1.3 Background Initialization and Construction 20
3.1.4 Foreground Detection 21
3.2 Principal Component Analysis 21
3.3 Detecting Body Shape Changes with a Body-Bounding Trapezoid 23
Chapter 4 Dangerous Behavior Detection 27
4.1 Falling and Squatting Determination 27
4.1.1 Falling and Squatting Detection in the Outer Image Region 30
4.1.2 Falling Detection in the Image Center 31
4.2 Running Detection 33
4.2.1 Image-to-World Coordinate Conversion 34
4.2.2 Moving Speed Computation 36
4.3 Defining Abnormal Behaviors by Region 37
4.3.1 Abnormal Behavior Detection in Active Regions 38
4.3.2 Abnormal Behavior Detection in Inactive and Forbidden Regions 38
Chapter 5 Experiments 40
5.1 Equipment and Environment Setup 40
5.2 Abnormal Behavior Detection 41
5.2.1 Falling and Squatting Detection 41
5.2.2 Running Detection 46
5.3 Dangerous Behavior Detection in Home Environments 48
Chapter 6 Conclusions and Future Work 52
References 53
Advisor: 曾定章    Approval Date: 2015-07-27
