Master's/Doctoral Thesis 105522069 — Detailed Record




Author: Chang-Wei Chen (陳昌暐)    Department: Computer Science and Information Engineering
Thesis Title: Multi-Modal Biometric Neural Network Classifier for Pet Identity Authentication
Related Theses
★ An Intelligent Controller Development Platform Integrating a GRAFCET Virtual Machine
★ Design and Implementation of a Distributed Industrial Electronic Kanban Network System
★ Design and Implementation of a Two-Point Touch Screen Based on a Dual-Camera Vision System
★ An Embedded Computing Platform for Intelligent Robots
★ An Embedded System for Real-Time Moving Object Detection and Tracking
★ A Multiprocessor Architecture and Distributed Control Algorithms for Solid-State Drives
★ A Human-Machine Interaction System Based on Stereo-Vision Gesture Recognition
★ Robot System-on-Chip Design Integrating Biomimetic Intelligent Behavior Control
★ Design and Implementation of an Embedded Wireless Image Sensor Network
★ A License Plate Recognition System Based on a Dual-Core Processor
★ Continuous 3D Gesture Recognition Based on Stereo Vision
★ Design and Hardware Implementation of a Miniature, Ultra-Low-Power Wireless Sensor Network Controller
★ Real-Time Face Detection, Tracking, and Recognition in Streaming Video: An Embedded System Design
★ Embedded Hardware Design of a Fast Stereo Vision System
★ Design and Implementation of a Real-Time Image Stitching System
★ An Embedded Gait Recognition System Based on a Dual-Core Platform
  1. This electronic thesis has been approved for immediate open access.
  2. The open-access full text is licensed to users solely for personal, non-profit retrieval, reading, and printing for academic research purposes.
  3. Please observe the relevant provisions of the Copyright Act of the Republic of China; do not reproduce, distribute, adapt, repost, or broadcast it without authorization.

Abstract (Chinese) At present, the government's primary approach to managing pets and stray animals is to implant RFID chips for identification. Because implantation is invasive, owners are often unwilling to have their pets chipped, which leaves gaps in animal management. This study proposes a multi-modal biometric method that identifies pets non-invasively from images. A CNN first classifies the dog breed; multi-modal biometric features (muzzle texture, body contour, and facial geometry) are then extracted; and a hybrid neural network classifier finally identifies the individual dog. We designed a Multi-Modal Hybrid Neural Network (MM-HNN) classifier system to validate pet-recognition performance. Experimental comparison shows that a plain CNN deep-learning model achieves an equal error rate (EER) of 23.77% for pet identification, the MM-HNN achieves 13.45%, and CNN breed recognition combined with MM-HNN identification achieves 4.65%. In an experiment with three groups of littermate dogs, the identification EER was 4.57%. In recognition experiments on the three biometric modalities, the recognition rate was 88.33% for the texture modality, 84.83% for the contour modality, and 79.83% for the facial modality, while fusing the three modalities raised the multi-modal recognition rate to 95%, demonstrating that the multi-modal hybrid neural network classifier delivers good recognition performance.
Abstract (English) At present, the government mainly relies on implanted RFID chips to supervise pets and stray animals and to identify individual pets. Because this means is invasive, the public's willingness to have chips implanted in their pets is low, leading to administrative gaps. This study proposes a multi-modal biometric recognition method that identifies pets using non-invasive image recognition. The dog breed is first classified by a CNN (Convolutional Neural Network); the pet's multi-modal biometric features, namely muzzle pattern, body contour, and facial geometry, are then extracted; and the dog is finally identified by a hybrid neural network classifier. A Multi-Modal Hybrid Neural Network (MM-HNN) classifier system is designed to validate pet-identification performance. Empirical comparison shows that the Equal Error Rate (EER) of a plain CNN deep-learning model for pet identification is 23.77%, the EER of MM-HNN pet identification is 13.45%, and the EER of CNN breed recognition combined with MM-HNN identification is 4.65%. In an experiment on three groups of littermate dogs, the identification EER is 4.57%. In recognition experiments on the three biometric modalities, the texture-modality recognition rate is 88.33%, the contour-modality rate is 84.83%, and the facial-modality rate is 79.83%, while fusing the three modalities yields a multi-modal recognition rate of 95%, showing that the MM-HNN classifier has good recognition performance.
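The Equal Error Rate figures quoted in the abstract are a standard biometric metric: the operating point where the false accept rate (FAR) equals the false reject rate (FRR). The sketch below is illustrative only, not the thesis code, and the scores and function name are assumptions; it sweeps an accept threshold over genuine and impostor similarity scores to find where the two error rates cross.

```python
# Illustrative sketch only (not the thesis code): computing the Equal
# Error Rate (EER) from raw match scores.
def equal_error_rate(genuine, impostor):
    """EER: operating point where false accept rate = false reject rate.

    genuine  -- similarity scores of same-identity comparisons
    impostor -- similarity scores of different-identity comparisons
    Higher score means "more likely the same dog".
    """
    best_gap, eer = float("inf"), None
    # Sweep every observed score as a candidate accept threshold.
    for t in sorted(set(genuine) | set(impostor)):
        far = sum(s >= t for s in impostor) / len(impostor)  # false accepts
        frr = sum(s < t for s in genuine) / len(genuine)     # false rejects
        if abs(far - frr) < best_gap:
            best_gap, eer = abs(far - frr), (far + frr) / 2
    return eer

# Toy scores: the rates cross where FAR = FRR = 0.2, i.e. an EER of 20%.
eer = equal_error_rate(genuine=[0.9, 0.8, 0.75, 0.6, 0.95],
                       impostor=[0.2, 0.3, 0.5, 0.65, 0.1])
```

A lower EER means a better trade-off between falsely accepting an impostor dog and falsely rejecting the enrolled one, which is why the combined CNN + MM-HNN pipeline's 4.65% improves on the plain CNN's 23.77%.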
Keywords (Chinese): ★ multi-modal    Keywords (English):
Table of Contents
Abstract (Chinese) I
Abstract (English) II
Acknowledgements III
Table of Contents IV
List of Figures VII
List of Tables XI
Chapter 1: Introduction 1
1.1 Research Background 1
1.2 Research Objectives 2
1.3 Thesis Organization 3
Chapter 2: Literature Review 4
2.1 Texture Feature Extraction 4
2.1.1 Gray-Level Co-occurrence Matrix 4
2.1.2 Gray-Level Gradient Co-occurrence Matrix 8
2.1.3 Local Binary Patterns 10
2.1.4 Tamura Texture Features 12
2.2 Contour Feature Extraction 15
2.3 Artificial Neural Networks 17
2.3.1 Probabilistic Neural Network 17
2.3.2 Self-Organizing Map Network 18
2.3.3 Multilayer Feedforward Neural Network 21
2.4 Convolutional Neural Networks 24
Chapter 3: Multi-Modal Neural Network Classifier 30
3.1 Architecture Overview 30
3.2 CNN Dog Breed Recognition 31
3.3 Multi-Modal Hybrid Neural Network Classifier 32
3.4 Multi-Modal Pet Identification 33
Chapter 4: Multi-Modal Pet Recognition System Architecture Design 39
4.1 MIAT Design Methodology 39
4.1.1 IDEF0 Hierarchical Modular Design 40
4.1.2 Grafcet 41
4.2 System Architecture 43
4.3 Discrete-Event Modeling 47
4.3.1 Image Preprocessing 48
4.3.2 Feature Extraction 49
4.3.3 Contour Feature Extraction 50
4.3.4 Facial Feature Extraction 51
4.3.5 Texture Feature Extraction 52
4.3.6 Neural Network Classifier 54
Chapter 5: Experiments 55
5.1 Experimental Environment 55
5.1.1 Dog Breed Image Database 55
5.1.2 Self-Built Experimental Database 57
5.2 Recognition Performance Evaluation Experiment 58
5.3 Pet Identification Experiment 60
5.4 Littermate Dog Experiment 64
5.5 Training Sample Size Experiment 69
5.6 Recognition Experiment on the Three Biometric Modalities 72
Chapter 6: Conclusion 76
6.1 Conclusion 76
6.2 Future Work 77
References 78
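Among the texture descriptors listed in Section 2.1, the gray-level co-occurrence matrix (Section 2.1.1) admits a compact illustration. The sketch below is an assumption for illustration, not the thesis implementation: it counts co-occurring pixel values for a single horizontal offset and derives the Haralick contrast statistic from the normalized counts.

```python
# Illustrative sketch (not the thesis implementation) of the gray-level
# co-occurrence matrix (GLCM) from Section 2.1.1, for one horizontal
# offset, plus the Haralick contrast statistic derived from it.
def glcm(image, levels, dx=1, dy=0):
    """Count pixel-value pairs (i, j) separated by the offset (dx, dy)."""
    counts = [[0] * levels for _ in range(levels)]
    h, w = len(image), len(image[0])
    for y in range(h):
        for x in range(w):
            ny, nx = y + dy, x + dx
            if 0 <= ny < h and 0 <= nx < w:
                counts[image[y][x]][image[ny][nx]] += 1
    return counts

def contrast(counts):
    """Haralick contrast: (i - j)^2 weighted by normalized co-occurrence."""
    total = sum(sum(row) for row in counts)
    return sum((i - j) ** 2 * counts[i][j] / total
               for i in range(len(counts)) for j in range(len(counts)))

image = [[0, 0, 1],
         [0, 1, 1],
         [2, 2, 2]]          # a toy 3-level grayscale patch
P = glcm(image, levels=3)    # horizontal neighbor pairs only
```

In practice such statistics (contrast, energy, homogeneity, and so on) over several offsets and angles form the texture feature vector fed to the classifier.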
Advisor: Ching-Han Chen (陳慶瀚)    Date of Approval: 2018-07-02

For questions about this thesis, please contact the Promotion Services Division of the National Central University Library, TEL: (03)422-7151 ext. 57407, or by e-mail.