Master's/Doctoral Thesis 100522062: Detailed Record




Author: 黃郁如 (Yu-Ju Huang)    Department: Computer Science and Information Engineering
Thesis Title: 以人臉五官特徵為基礎之適應型性別辨識
(An Adaptive Method for Gender Recognition Based on Facial Components)
Related Theses
★ Real-Time Online Identity Recognition Using Visemes and Voice Biometric Features
★ An Image-Based SMD Carrier Tape Alignment System
★ Content Forgery Detection and Deleted-Data Recovery for Handheld Mobile Devices
★ License Plate Verification Based on the SIFT Algorithm
★ Local Pattern Features Based on Dynamic Linear Decision Functions for Face Recognition
★ A GPU-Based SAR Database Simulator: A Parallel Architecture for SAR Echo Signal and Image Databases (PASSED)
★ Personal Identity Verification Using Palmprints
★ Video Indexing Using Color Statistics and Camera Motion
★ Form Document Classification Using Field Clustering Features and Four-Direction Adjacency Trees
★ Offline Chinese Character Recognition Using Stroke Features
★ Image Motion Vector Estimation Using Adaptive Block Matching Combined with Multi-Image Information
★ Color Image Analysis and Its Applications to Color-Quantized Image Retrieval and Face Detection
★ Extraction and Recognition of Logos on Chinese and English Business Cards
★ Chinese Signature Verification Using Virtual-Stroke Information Features
★ Face Detection, Face Pose Classification, and Face Recognition Based on Triangle Geometry and Color Features
★ A Complementary Skin-Color-Based Face Detection Strategy
  1. The author has agreed to make the electronic full text of this thesis openly available immediately.
  2. The open-access electronic full text is licensed only for personal, non-profit retrieval, reading, and printing for the purpose of academic research.
  3. Please observe the relevant provisions of the Copyright Act of the Republic of China; do not reproduce, distribute, adapt, repost, or broadcast this work without authorization.

Abstract (Chinese) Gender is an important human attribute. An automatic and effective gender recognition system can be widely applied in areas such as security surveillance, human-computer interfaces, and customer behavior analysis. In customer behavior analysis, for example, a digital signage board equipped with gender recognition can play different advertisements for different genders and thereby attract more potential customers.
Facial appearance differs noticeably between genders, so using facial features to determine gender is the most intuitive approach, and many gender recognition methods based on the face and its components have been proposed in recent years. In real environments, however, pedestrians often wear sunglasses or scarves, and such partial occlusion of the face can lower the recognition rate. This thesis therefore proposes an adaptive gender recognition method: an occlusion classification stage first detects which facial-component regions are occluded, and only the non-occluded regions are then used for gender recognition. This dynamic selection of facial components avoids misclassifications caused by partial occlusion and raises the overall recognition rate (a minimal sketch of this selection-and-fusion idea follows this abstract).
The experiments examine three aspects: the gender recognition results of individual component regions, adaptive gender recognition that combines the results of all regions, and a robustness test using partially occluded images. The results show that the proposed method maintains a good recognition rate on both non-occluded and partially occluded face images.
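The following is a minimal Python sketch of the selection-and-fusion idea described above: each facial component has its own gender classifier, a per-component occlusion check decides which components can be trusted, and only the non-occluded components contribute to the final decision. The component names, classifier interfaces, and averaging rule are illustrative assumptions, not the thesis's actual implementation.

```python
from typing import Callable, Dict

import numpy as np

# Hypothetical component set; the thesis works with facial-component regions.
COMPONENTS = ["hair", "eyes", "nose", "mouth"]


def adaptive_gender_fusion(
    patches: Dict[str, np.ndarray],
    is_occluded: Dict[str, Callable[[np.ndarray], bool]],
    gender_prob: Dict[str, Callable[[np.ndarray], float]],
) -> str:
    """Fuse per-component decisions, skipping occluded components.

    patches      -- cropped, normalized image patch per component
    is_occluded  -- per-component occlusion classifiers (True = occluded)
    gender_prob  -- per-component gender classifiers returning P(male)
    """
    votes = []
    for name in COMPONENTS:
        patch = patches.get(name)
        if patch is None or is_occluded[name](patch):
            continue  # occluded or missing components are left out of the fusion
        votes.append(gender_prob[name](patch))

    if not votes:
        return "unknown"  # every component occluded; no reliable decision

    # Simple average of the remaining components' probabilities; the thesis's
    # actual fusion rule may weight components differently.
    return "male" if float(np.mean(votes)) >= 0.5 else "female"
```

Averaging the surviving components' probabilities is only one possible fusion rule; the point of the sketch is that occluded components are excluded before any fusion takes place.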
Abstract (English) Gender is an important personal attribute of human beings, so an automatic and effective gender recognition system is desirable in applications such as intelligent surveillance, human-computer interaction, and customer behavior analysis. In customer behavior analysis, for example, applying gender recognition to digital signage makes it possible to show customized advertisements and thereby attract potential customers.
Since the human face exhibits clear sexual dimorphism, using facial features is an intuitive approach to gender recognition, and many methods based on facial components have been proposed. In real life, however, people may wear sunglasses or scarves, which produces partially occluded face images and degrades recognition performance. In this thesis, we propose an adaptive gender recognition method that copes with this problem by dynamically selecting facial components: an occlusion classification procedure first detects the occluded components, and only the non-occluded components take part in the recognition stage, avoiding the misclassifications that occlusion would otherwise cause (a sketch of the per-component classifiers follows this abstract).
Three experiments were conducted to verify the validity of the proposed method: single-component classifiers, adaptive gender recognition that fuses multiple classifiers, and a robustness test with partially occluded images. The results demonstrate that the proposed method maintains high accuracy under all of these conditions.
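The keywords and chapter outline below indicate that each component's gender classifier uses PCA for dimensionality reduction followed by a random forest, and that occlusion is detected from HOG features with an SVM. The sketch below illustrates those two per-component classifiers using scikit-image and scikit-learn; the patch size, feature parameters, and model settings are assumptions for illustration, not the thesis's reported configuration.

```python
import numpy as np
from skimage.feature import hog
from sklearn.decomposition import PCA
from sklearn.ensemble import RandomForestClassifier
from sklearn.pipeline import make_pipeline
from sklearn.svm import SVC


def hog_features(patches: np.ndarray) -> np.ndarray:
    """One HOG descriptor per normalized grayscale patch (assumed N x 32 x 32)."""
    return np.array([
        hog(p, orientations=9, pixels_per_cell=(8, 8), cells_per_block=(2, 2))
        for p in patches
    ])


def train_occlusion_classifier(patches: np.ndarray, occluded: np.ndarray) -> SVC:
    """Linear SVM on HOG features; labels: 1 = occluded, 0 = visible."""
    clf = SVC(kernel="linear")
    clf.fit(hog_features(patches), occluded)
    return clf


def train_gender_classifier(patches: np.ndarray, gender: np.ndarray):
    """PCA on raw pixel vectors, then a random-forest gender classifier."""
    clf = make_pipeline(
        PCA(n_components=50),  # assumed number of principal components
        RandomForestClassifier(n_estimators=100, random_state=0),
    )
    # Flatten each patch into a raw pixel vector before PCA.
    clf.fit(patches.reshape(len(patches), -1), gender)
    return clf
```

In use, one occlusion classifier and one gender classifier would be trained per facial component, and their predictions plugged into a fusion step such as the one sketched after the Chinese abstract.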
Keywords (Chinese) ★ 性別辨識 (gender recognition)
★ 適應型決策融合 (adaptive decision fusion)
★ 主動形狀模型 (active shape model)
★ 隨機森林 (random forest)
Keywords (English) ★ gender recognition
★ adaptive decision making
★ active shape model
★ random forest
Table of Contents
Abstract (Chinese)
Abstract (English)
Acknowledgements
Table of Contents
List of Figures
List of Tables
Chapter 1  Introduction
1.1 Research Motivation and Objectives
1.2 System Flow
1.3 Thesis Organization
Chapter 2  Literature Review
2.1 Feature Extraction and Classification Methods
2.2 Local-Feature-Based Approaches
Chapter 3  Face and Facial-Component Detection
3.1 Facial Landmark Localization
3.1.1 Active Shape Model (ASM)
3.1.2 Comparison of ASM and AAM
3.2 Facial-Component Selection and Image Normalization
3.2.1 Image Rotation
3.2.2 Component-Region Selection and Normalization
Chapter 4  Gender Recognition
4.1 Gender Classification Method
4.1.1 Hair Detection
4.1.2 Dimensionality Reduction with PCA
4.1.3 Gender Classification with Random Forest
4.2 Occlusion Detection for Facial Components
4.2.1 HOG Feature Extraction
4.2.2 Occlusion Classification with SVM
4.3 Adaptive Decision Fusion
Chapter 5  Experimental Results and Analysis
5.1 Face Image Databases
5.2 Single-Classifier Results
5.3 Adaptive Gender Classification Results
5.4 Partial-Occlusion Tests
Chapter 6  Conclusions and Future Work
6.1 Conclusions
6.2 Future Research Directions
References
References
[1] R. C. Luo, T.-T. Lin, and M.-C. Tsai, “Gender classification based on multi-classifiers fusion for human-robot interaction,” International Symposium on Industrial Electronics, pp. 796-800, 2011.
[2] C. B. Ng, Y. H. Tay, and B.-M. Goi, “Recognizing human gender in computer vision: a survey,” Pacific Rim International Conference on Artificial Intelligence, vol. 7458, pp. 335-346, 2012.
[3] J.-M. Fellous, “Gender discrimination and prediction on the basis of facial metric information,” Vision Research, vol. 37, no. 14, pp. 1961-1973, 1997.
[4] S. Mozaffari, H. Behravan, and R. Akbari, “Gender classification using single frontal image per person: combination of appearance and geometric based features,” International Conference on Pattern Recognition, pp. 1192-1195, 2010.
[5] H.-C. Lian and B.-L. Lu, “Multi-view gender classification using local binary patterns and support vector machines,” International Symposium on Neural Networks, pp. 202-209, 2006.
[6] C. Shan, “Learning local binary patterns for gender classification on real-world face images,” Pattern Recognition Letters, vol. 33, no. 4, pp. 431-437, 2012.
[7] B. Li, X.-C. Lian, and B.-L. Lu, “Gender classification by combining clothing, hair and facial component classifiers,” Neurocomputing, vol. 76, no. 1, pp. 18-27, 2012.
[8] B. Xia, H. Sun, and B.-L. Lu, “Multi-view gender classification based on local Gabor binary mapping pattern and support vector machines,” International Joint Conference on Neural Networks, pp. 3388-3395, 2008.
[9] L. A. Alexandre, “Gender recognition: a multiscale decision fusion approach,” Pattern Recognition Letters, vol. 31, no. 11, pp. 1422-1427, 2010.
[10] P.-H. Lee, J.-Y. Hung, and Y.-P. Hung, “Automatic gender recognition using fusion of facial strips,” International Conference on Pattern Recognition, pp. 1140-1143, 2010.
[11] S. Y. D. Hu, B. Jou, A. Jaech, and M. Savvides, “Fusion of region-based representations for gender identification,” International Joint Conference on Biometrics, pp. 1-7, 2011.
[12] S. Milborrow, “Locating facial features with active shape models,” Master’s thesis, University of Cape Town, 2007.
[13] S. Milborrow and F. Nicolls, “Locating facial features with an extended active shape model,” European Conference on Computer Vision, pp. 504-513, 2008.
[14] T. F. Cootes, C. J. Taylor, D. H. Cooper, and J. Graham, “Active shape models - their training and application,” Computer Vision and Image Understanding, vol. 61, no. 1, pp. 38-59, 1995.
[15] T. F. Cootes, G. J. Edwards, and C. J. Taylor, “Active appearance models,” European Conference on Computer Vision, vol. 2, pp. 484-498, 1998.
[16] T. F. Cootes, G. J. Edwards, and C. J. Taylor, “Comparing active shape models with active appearance models,” British Machine Vision Conference, pp. 173-182, 1999.
[17] L. Breiman, “Random forests,” Machine Learning, vol. 45, no. 1, pp. 5-32, 2001.
[18] A. Verikas, A. Gelzinis, and M. Bacauskiene, “Mining data with random forests: a survey and results of new tests,” Pattern Recognition, vol. 44, no. 2, pp. 330-349, 2011.
[19] N. Dalal and B. Triggs, “Histograms of oriented gradients for human detection,” Computer Vision and Pattern Recognition, vol. 1, pp. 886-893, 2005.
[20] C. Cortes and V. Vapnik, “Support-vector networks,” Machine Learning, vol. 20, no. 3, pp. 273-297, 1995.
[21] P. J. Phillips, H. Wechsler, J. Huang, and P. J. Rauss, “The FERET database and evaluation procedure for face-recognition algorithms,” Image and Vision Computing, vol. 16, no. 5, pp. 295-306, 1998.
[22] A. Martinez and R. Benavente, “The AR face database,” CVC Technical Report #24, 1998.
Advisor: 范國清 (Kuo-Chin Fan)    Approval Date: 2013-7-19
