Master's/Doctoral Thesis 985201100: Detailed Record




Name  Shih-Fu Huang (黃識夫)    Department  Electrical Engineering
Thesis title  應用Kinect之人體多姿態辨識
(Human posture recognition using Kinect)
Related theses
★ Control of a hybrid power supply system for direct methanol fuel cells
★ Water quality monitoring for hydroponic plants using refractive-index measurement
★ A DSP-based automatic guidance and control system for a model car
★ Redesign of the motion control of a rotary inverted pendulum
★ Fuzzy control decisions for freeway on-ramp and off-ramp signals
★ A study of the fuzziness of fuzzy sets
★ Further improvement of the motion-control performance of a dual-mass spring-coupled system
★ A vision system for robotic air hockey
★ Offense and defense control of an air-hockey robot
★ Attitude control of a model helicopter
★ Stability analysis and design of fuzzy control systems
★ A real-time recognition system for access-control monitoring
★ Air hockey: human versus robotic arm
★ A mahjong tile recognition system
★ Application of correlation-error neural networks to radiometric measurement of vegetation and soil water content
★ Standing control of a three-link robot
Files / electronic full text
  1. The access permission for this electronic thesis is "open access immediately."
  2. Once open access takes effect, the electronic full text is licensed for academic research only: personal, non-profit searching, reading, and printing.
  3. Please comply with the Copyright Act of the Republic of China (Taiwan); do not reproduce, distribute, adapt, repost, or broadcast it without authorization.

Abstract (Chinese)  The objective of this thesis is to use a Kinect sensor installed in a room to capture a target person and recognize five human postures: standing, sitting, bending, kneeling, and lying. Horizontal projection, star skeletonization, a neural network, and a similar-feature process are applied to recognize these postures. After the Kinect captures the human image, the human silhouette is separated from the background based on changes in the depth data. The horizontal projection is then used to determine whether the posture is kneeling; if not, star skeletonization computes five feature vectors from the body centroid to feature points on the contour. These five feature vectors, together with the depth data, are the inputs of a Learning Vector Quantization (LVQ) [20] neural network, which is trained to obtain the weights for posture recognition; the LVQ outputs then distinguish standing, forward sitting, non-forward sitting, bending, and lying. Standing and non-forward sitting are further passed through a similar-feature process: using the horizontal and vertical projections, interference from the arms is filtered out and the body's length-to-width ratio is computed, which refines the classification and raises the recognition rate. In different indoor environments, and for people of different body types at different distances from the Kinect, the system achieves real-time and stable posture recognition. It can therefore be applied in practice to home nursing and care in recreational environments.
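To make the feature-extraction steps above concrete, the following is a minimal sketch, in Python with NumPy, of a horizontal projection and a star-skeleton feature extractor of the kind the abstract describes. It is not the thesis implementation; the function names, the smoothing window, and the peak-selection rule are illustrative assumptions, and the definitions actually used in Sections 3.2.1 and 3.2.2 of the thesis may differ in detail.

import numpy as np

def horizontal_projection(mask):
    """Count of silhouette pixels in each image row."""
    return mask.astype(np.uint8).sum(axis=1)

def star_skeleton_features(mask, num_branches=5):
    """Vectors from the silhouette centroid to the num_branches strongest
    local maxima of the centroid-to-boundary distance signal."""
    mask = mask.astype(bool)
    ys, xs = np.nonzero(mask)
    cy, cx = ys.mean(), xs.mean()                      # silhouette centroid

    # Boundary pixels: silhouette pixels with at least one background neighbour.
    padded = np.pad(mask, 1)
    interior = (padded[:-2, 1:-1] & padded[2:, 1:-1] &
                padded[1:-1, :-2] & padded[1:-1, 2:])
    by, bx = np.nonzero(mask & ~interior)

    # Order boundary pixels by angle around the centroid; the distance to the
    # centroid then becomes a 1-D signal whose peaks are the branch tips.
    order = np.argsort(np.arctan2(by - cy, bx - cx))
    by, bx = by[order], bx[order]
    dist = np.hypot(by - cy, bx - cx)

    smooth = np.convolve(dist, np.ones(5) / 5, mode="same")   # light smoothing
    is_peak = (smooth > np.roll(smooth, 1)) & (smooth >= np.roll(smooth, -1))
    peaks = np.nonzero(is_peak)[0]
    top = peaks[np.argsort(smooth[peaks])[::-1][:num_branches]]

    # One (dy, dx) feature vector per selected extreme point.
    return np.stack([by[top] - cy, bx[top] - cx], axis=1)

# Tiny illustration on a synthetic upright silhouette.
demo = np.zeros((120, 80), dtype=bool)
demo[20:110, 30:50] = True
print(horizontal_projection(demo).max(), star_skeleton_features(demo).shape)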
Abstract (English)  The objective of this study is to recognize five human postures captured by a Kinect sensor. Using horizontal projection, star skeleton, neural network, and similar-feature-process techniques, five postures are recognized: standing, sitting, bending, kneeling, and lying. After the Kinect captures an image of a person, the human silhouette is segmented from the background based on the difference in depth data between the body and the background. The horizontal projection is then used to determine whether the posture is kneeling. If it is not kneeling, a star skeleton is used to compute five maximum-distance vectors from feature points on the contour to the centroid of the body. The five branches of the star skeleton and the depth data are the inputs used to train a Learning Vector Quantization (LVQ) network. The outputs of the LVQ are then used to recognize five postures: standing, forward sitting, non-forward sitting, bending, and lying. The standing and non-forward sitting postures are further processed by the similar-feature process based on the horizontal and vertical projections; the disturbance caused by the arms is filtered out so that the length-to-width ratio of the body can be computed, which improves the recognition rate. The posture recognition system not only works in different indoor environments and at different distances between the Kinect and the person, but also achieves real-time, stable recognition for people of different physiques. Therefore, the system can be applied in practice to home nursing and care in recreational environments.
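As a companion to the classification step, here is a minimal sketch of an LVQ1 learner of the kind cited in [20], again in Python with NumPy and not taken from the thesis. The feature layout (five branch lengths plus a depth value), the number of codebook vectors per class, and the learning-rate schedule are assumptions made only for illustration.

import numpy as np

def train_lvq(X, y, protos_per_class=1, lr=0.1, epochs=50, seed=0):
    """Train LVQ1 codebook vectors on features X (N x D) with labels y (N,)."""
    X = np.asarray(X, dtype=float)
    y = np.asarray(y)
    rng = np.random.default_rng(seed)
    classes = np.unique(y)

    # Initialise codebook vectors by sampling training points of each class.
    protos = np.vstack([X[y == c][rng.choice((y == c).sum(), protos_per_class,
                                             replace=False)]
                        for c in classes]).astype(float)
    proto_y = np.repeat(classes, protos_per_class)

    for epoch in range(epochs):
        alpha = lr * (1.0 - epoch / epochs)            # decaying learning rate
        for i in rng.permutation(len(X)):
            j = np.argmin(np.linalg.norm(protos - X[i], axis=1))   # winner
            step = alpha * (X[i] - protos[j])
            # LVQ1 rule: pull the winner toward same-class samples,
            # push it away from samples of other classes.
            protos[j] += step if proto_y[j] == y[i] else -step
    return protos, proto_y

def predict_lvq(protos, proto_y, X):
    """Label each row of X with the class of its nearest codebook vector."""
    d = np.linalg.norm(np.asarray(X, float)[:, None, :] - protos[None, :, :], axis=2)
    return proto_y[np.argmin(d, axis=1)]

# Illustrative use with synthetic 6-D features (five branch lengths + depth).
rng = np.random.default_rng(1)
X = np.vstack([rng.normal(loc=m, scale=0.3, size=(40, 6)) for m in (0.0, 1.0, 2.0)])
y = np.repeat(np.arange(3), 40)
protos, proto_y = train_lvq(X, y)
print("training accuracy:", (predict_lvq(protos, proto_y, X) == y).mean())

In the thesis the trained network separates standing, forward sitting, non-forward sitting, bending, and lying; the sketch above shows only the learning and nearest-prototype classification rule itself.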
Keywords (Chinese) ★ Human posture recognition
★ Kinect
Keywords (English) ★ Posture recognition
★ Kinect
Table of contents  Abstract (Chinese)
Abstract (English)
Acknowledgements
Table of Contents
List of Figures
List of Tables
Chapter 1  Introduction
  1.1  Research Background and Motivation
  1.2  Literature Review
  1.3  Objectives and Contributions
  1.4  Thesis Organization and Overview of the Posture Recognition Procedure
Chapter 2  Hardware Architecture and Software Development Environment
  2.1  Hardware Architecture
  2.2  Software Development Environment
Chapter 3  Image Preprocessing
  3.1  Human Silhouette Extraction and Noise Removal
    3.1.1  Capturing the Human Silhouette with Kinect
    3.1.2  Removing Image Noise
  3.2  Feature Extraction from the Human Silhouette
    3.2.1  Horizontal-Projection Features
    3.2.2  Star-Skeleton Features
Chapter 4  Human Posture Recognition
  4.1  Posture Recognition Procedure
  4.2  Kneeling Recognition
  4.3  Recognition of Forward Sitting, Bending, and Lying
    4.3.1  The LVQ Neural Network and Its Architecture
    4.3.2  Selection and Ordering of Training Data
    4.3.3  Training of Network Weights and Results
  4.4  Recognition of Standing and Non-Forward Sitting
    4.4.1  Body Width Estimation
    4.4.2  Body Height Estimation
    4.4.3  Posture Recognition by the Body Length-to-Width Ratio
Chapter 5  Experimental Results
  5.1  Static Posture Recognition
    5.1.1  Results at Different Distances
    5.1.2  Results for Different Body Types
  5.2  Real-Time Posture Recognition Test
  5.3  Recognition Speed Test
Chapter 6  Conclusions and Future Work
  6.1  Conclusions
  6.2  Future Work
    6.2.1  Publications
References
References  [1] Department of Statistics, Ministry of the Interior, statistics on the proportion of the elderly population. Retrieved May 20, 2011, from http://www.moi.gov.tw/chi/chi_news/news.aspx?time=2011%2f5%2f25+%e4%b8%8b%e5%8d%88+02%3a50%3a18&search_unit=04.
[2] Kinect sensor manufacturer homepage (PrimeSense). Retrieved January 10, 2011, http://www.primesense.tw/.
[3] 李乾丞 (advised by Prof. 陳永耀), "Fast Human Posture Recognition by Heuristic Rules," Master's thesis, National Taiwan University, 2006.
[4] C. C. Li and Y. Y. Chen, "Human posture recognition by simple rules," in Proceedings of the IEEE International Conference on Systems, Man and Cybernetics, Oct. 2006, pp. 3237-3240.
[5] B. Castiello, T. D'Orazio, A. M. Fanelli, P. Spagnolo, and M. A. Torsello, "A model-free approach for posture classification," in Proceedings of the IEEE Conference on Advanced Video and Signal Based Surveillance, Sep. 2005, pp. 276-281.
[6] B. Boulay, F. Bremond, and M. Thonnat, "Posture recognition with a 3D humanoid model," in Proceedings of the IEE International Symposium on Imaging for Crime Detection and Prevention, Jun. 2005, pp. 135-138.
[7] J. W. Hsieh, C. H. Chuang, S. Y. Chen, C. C. Chen, and K. C. Fan, "Segmentation of human body parts using deformable triangulation," IEEE Transactions on Systems, Man, and Cybernetics, Part A: Systems and Humans, vol. 40, no. 3, pp. 596-610, May 2010.
[8] C. H. Chuang, J. W. Hsieh, L. W. Tsai, and K. C. Fan, "Human action recognition using star templates and Delaunay triangulation," in Proceedings of the International Conference on Intelligent Information Hiding and Multimedia Signal Processing, 2008, pp. 179-182.
[9] C. F. Juang and C. T. Lin, "An online self-constructing neural fuzzy inference network and its applications," IEEE Transactions on Fuzzy Systems, vol. 6, no. 1, pp. 12-32, Feb. 1998.
[10] C. F. Juang and C. M. Chang, "Human body posture classification by a neural fuzzy network and home care system application," IEEE Transactions on Systems, Man, and Cybernetics, Part A: Systems and Humans, vol. 37, no. 6, pp. 984-994, 2007.
[11] H. S. Chen, H. T. Chen, Y. W. Chen, and S. Y. Lee, "Human action recognition using star skeleton," in Proceedings of the 4th ACM International Workshop on Video Surveillance and Sensor Networks, 2006, pp. 171-178.
[12] R. Cucchiara, C. Grana, A. Prati, and R. Vezzani, "Probabilistic posture classification for human-behavior analysis," IEEE Transactions on Systems, Man, and Cybernetics, Part A: Systems and Humans, vol. 35, no. 1, pp. 42-54, Jan. 2005.
[13] J. Gu, X. Ding, S. Wang, and Y. Wu, "Action and gait recognition from recovered 3-D human joints," IEEE Transactions on Systems, Man, and Cybernetics, Part B: Cybernetics, vol. 40, no. 4, pp. 1021-1033, Aug. 2010.
[14] C. Wu and H. Aghajan, "Model-based human posture estimation for gesture analysis in an opportunistic fusion smart camera network," in Proceedings of the IEEE Conference on Advanced Video and Signal Based Surveillance, Sep. 2007, pp. 453-458.
[15] M. Quwaider and S. Biswas, "Body posture identification using hidden Markov model with a wearable sensor network," in Proceedings of the 3rd ICST International Conference on Body Area Networks, 2008, pp. 152-159.
[16] H. Harms, O. Amft, and G. Tröster, "Estimating posture-recognition performance in sensing garments using geometric wrinkle modeling," IEEE Transactions on Information Technology in Biomedicine, vol. 14, no. 6, pp. 1436-1445, Nov. 2010.
[17] J. Meyer, B. Arnrich, J. Schumm, and G. Tröster, "Design and modeling of a textile pressure sensor for sitting posture classification," IEEE Sensors Journal, vol. 10, no. 8, pp. 1391-1398, Aug. 2010.
[18] D. U. Jeong, S. J. Kim, and W. Y. Chung, "Classification of posture and movement using a 3-axis accelerometer," in Proceedings of the International Conference on Convergence Information Technology, Nov. 2007, pp. 837-844.
[19] C. Mattmann, O. Amft, H. Harms, and G. Tröster, "Recognizing upper body postures using textile strain sensors," in Proceedings of the IEEE International Symposium on Wearable Computers, Oct. 2007, pp. 29-36.
[20] Introduction to LVQ networks. Retrieved June 24, 2011, http://en.wikipedia.org/wiki/Learning_Vector_Quantization
[21] Microsoft homepage. Retrieved May 25, 2011, http://www.microsoft.com/zh/tw/default.aspx.
[22] "Your body is the controller: how Microsoft Kinect does it." September 20, 2010, http://www.techbang.com.tw/posts/2936-get-to-know-how-it-works-kinect
[23] Kinect library homepage. Retrieved January 30, 2011, http://idav.ucdavis.edu/~okreylos/ResDev/Kinect/index.html.
[24] Introduction to the Kinect speckle pattern. Retrieved June 9, 2011, http://www.javaforge.com/wiki/103649.
[25] Visual Studio product homepage. Retrieved May 25, 2011, http://www.microsoft.com/express/Downloads/#2010-Visual-CPP.
[26] OpenCV homepage (Chinese). Retrieved June 25, 2010, http://www.opencv.org.cn/index.php/%E9%A6%96%E9%A1%B5.
[27] 繆紹綱 (translator), Digital Image Processing (Chinese edition), 普林斯頓國際有限公司, 2004.
[28] 陳翔傑 (advised by Prof. Wen-June Wang), "Design of an Automatic License Plate Recognition System," Master's thesis, Department of Electrical Engineering, National Central University, June 2005.
[29] Wikipedia entry for centroid (質心). Retrieved May 28, 2011, http://zh.wikipedia.org/wiki/%E8%B3%AA%E5%BF%83.
[30] 林汝喆 (advised by Prof. Wen-June Wang), "Mahjong Tile Recognition System," Master's thesis, Department of Electrical Engineering, National Central University, June 2002.
[31] Wikipedia entry for distance (距離). Retrieved May 28, 2011, http://zh.wikipedia.org/wiki/%E8%B7%9D%E9%9B%A2.
Advisor  Wen-June Wang (王文俊)    Date of approval  2011-07-01
