Master's/Doctoral Thesis 108522111: Detailed Record




Name: Shu-Yu Li (李書宇)    Department: Computer Science and Information Engineering
Thesis Title: 結合自聚類的寵物狗身分識別 (Pet identity recognition combined with auto-clustering)
Related theses:
★ 整合GRAFCET虛擬機器的智慧型控制器開發平台
★ 分散式工業電子看板網路系統設計與實作
★ 設計與實作一個基於雙攝影機視覺系統的雙點觸控螢幕
★ 智慧型機器人的嵌入式計算平台
★ 一個即時移動物偵測與追蹤的嵌入式系統
★ 一個固態硬碟的多處理器架構與分散式控制演算法
★ 基於立體視覺手勢辨識的人機互動系統
★ 整合仿生智慧行為控制的機器人系統晶片設計
★ 嵌入式無線影像感測網路的設計與實作
★ 以雙核心處理器為基礎之車牌辨識系統
★ 基於立體視覺的連續三維手勢辨識
★ 微型、超低功耗無線感測網路控制器設計與硬體實作
★ 串流影像之即時人臉偵測、追蹤與辨識─嵌入式系統設計
★ 一個快速立體視覺系統的嵌入式硬體設計
★ 即時連續影像接合系統設計與實作
★ 基於雙核心平台的嵌入式步態辨識系統
Files: full text viewable in the repository after 2026-08-03.
Abstract (Chinese): This thesis proposes a two-stage identification method for pet dog identity recognition. Input images are first clustered by their external biometric features, so that images with similar external appearance fall into the same cluster. Face localization then aligns each image, removing errors introduced by external factors at the time of photography, and the image is normalized to the dog-face bounding box. LBP feature conversion yields a texture feature map, and a Siamese network architecture compares the normalized images, computing the similarity between the input image and the images registered in the system database by Euclidean distance. Experiments on the dog dataset provided by the Council of Agriculture achieve an 87% recognition rate.
Abstract (English): In this paper, a two-stage identification method is proposed for pet dog identification. The input images are first grouped by their external biometric features, so that images with similar external features are classified into the same group, and each image is aligned by facial feature localization to eliminate the errors caused by external factors when the photos were taken. The aligned images are normalized to the dog-face bounding box and converted by LBP into texture feature maps, which a Siamese network architecture then compares. The similarity between the input images and the registered images in the system database is calculated using the Euclidean distance. Using the dog dataset provided by the Council of Agriculture, Executive Yuan, we achieve an 87% recognition rate.
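The matching stage described in the abstract chains LBP texture conversion with Euclidean-distance comparison against registered embeddings. Below is a minimal Python sketch of that stage, assuming scikit-image's local_binary_pattern for the texture map; the embed() histogram stand-in and the 0.8 threshold are illustrative assumptions, not the thesis's actual Siamese branch or its tuned decision value.

```python
# Minimal sketch of the LBP + Euclidean-distance matching stage.
# Assumptions: inputs are grayscale, face-normalized crops; embed() is a
# histogram stand-in for the Siamese branch; thresh is an illustrative value.
import numpy as np
from skimage.feature import local_binary_pattern

def lbp_map(gray_face):
    """Convert a normalized grayscale dog-face crop into an LBP texture map."""
    # 8 neighbors on a radius-1 circle, "uniform" patterns (codes 0..9).
    return local_binary_pattern(gray_face, P=8, R=1, method="uniform")

def embed(texture):
    """Stand-in embedding: a normalized histogram of the uniform LBP codes."""
    hist, _ = np.histogram(texture, bins=10, range=(0, 10), density=True)
    return hist

def identify(query_face, gallery, thresh=0.8):
    """Return the registered ID nearest in Euclidean distance, or None."""
    q = embed(lbp_map(query_face))
    dists = {dog_id: np.linalg.norm(q - emb) for dog_id, emb in gallery.items()}
    best = min(dists, key=dists.get)
    return best if dists[best] < thresh else None

# Registration: embed each known dog's normalized face crop once, e.g.
#   gallery = {"dog_042": embed(lbp_map(registered_face)), ...}
# identify(new_face, gallery) then answers "which registered dog, if any?"
```

In the thesis pipeline the gallery would hold Siamese-network embeddings of the registered images rather than raw LBP histograms, but the distance-and-threshold decision works the same way.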
Keywords (Chinese): ★ identity recognition (身分識別)
★ dog (狗)
Keywords (English):
Thesis Table of Contents:
Abstract (Chinese)
Abstract (English)
Acknowledgements
Table of Contents
List of Figures
List of Tables
Chapter 1: Introduction
1.1 Research Background
1.2 Research Objectives
1.3 Thesis Organization
Chapter 2: Literature Review
2.1 Image Segmentation
2.1.1 YOLO Image Segmentation
2.1.2 U-Net Semantic Segmentation
2.2 Image Auto-Clustering
2.2.1 Image Auto-Clustering with Fully Convolutional Auto-Encoders
2.2.2 ResNet Residual Neural Networks
2.3 Image Feature Transformation
2.3.1 Local Binary Patterns
2.4 Image Localization
2.4.1 MTCNN
2.5 Image Feature Point Recognition
2.5.1 FaceNet
Chapter 3: Pet Dog Identity Recognition System Design
3.1 Pet Dog Identity Recognition System Architecture Design
3.2 Discrete-Event Modeling of the Pet Dog Identity Recognition System
3.2.1 Image Auto-Clustering Grafcet
3.2.2 Face Detection/Localization Grafcet
3.2.3 Feature Transformation Grafcet
3.2.4 Image Recognition Grafcet
3.2.5 Registration/Matching Grafcet
Chapter 4: System Integration and Experiments
4.1 Experimental Environment and Database
4.2 Pet Dog Detection
4.3 Auto-Clustering Experiments
4.4 Image Localization Experiments
4.5 Image Feature Transformation Experiments
4.6 Image Recognition Experiments
4.7 Siamese Network Comparison Experiments
4.8 Discussion
4.9 Future Work
References
References: [1] D. H. Jang, K. S. Kwon, J. K. Kim, K. Y. Yang, and J. B. Kim, "Dog Identification Method Based on Muzzle Pattern Image," Applied Sciences, vol. 10, no. 24, p. 8994, 2020.
[2] H. S. Kühl and T. Burghardt, “Animal biometrics: quantifying and detecting phenotypic appearance,” Trends in Ecology and Evolution, vol. 28, no. 7, pp. 432-441, 2013.
[3] T. Ojala, M. Pietikainen, and D. Harwood, “Performance evaluation of texture measures with classification based on kullback discrimination of distributions,” in Proceedings of 12th International Conference on Pattern Recognition, vol. 1, pp. 582-585, 1994.
[4] H. Bay, A. Ess, T. Tuytelaars, and L. Van Gool, “Speeded-up robust features (SURF),” Computer Vision and Image Understanding, vol. 110, no. 3, pp. 346-359, 2008.
[5] S. Kumar and S. K. Singh, “Monitoring of pet animal in smart cities using animal biometrics,” Future Generation Computer Systems, vol. 83, pp. 553-563, 2018.
[6] S. Kumar and S. K. Singh, "Biometric recognition for pet animal," Journal of Software Engineering and Applications, 2014.
[7] J. Ouyang, H. He, Y. He, and H. Tang, "Dog recognition in public places based on convolutional neural network," International Journal of Distributed Sensor Networks, vol. 15, no. 5, Article 1550147719829675, 2019.
[8] F. Sung, Y. Yang, L. Zhang, T. Xiang, P. H. S. Torr, and T. M. Hospedales, “Learning to compare: Relation network for few-shot learning,” in Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, pp. 1199-1208, 2018.
[9] O. Vinyals, C. Blundell, T. Lillicrap, and D. Wierstra, “Matching networks for one shot learning,” Advances in Neural Information Processing Systems, vol. 29, pp. 3630-3638, 2016.
[10] F. Schroff, D. Kalenichenko, and J. Philbin, “Facenet: A unified embedding for face recognition and clustering,” in Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, pp. 815-823, 2015.
[11] B. Yoon, H. So, and J. Rhee, "A methodology for utilizing vector space to improve the performance of a dog face identification model," Applied Sciences, 2021.
[12] 李庆, 曾凯, 赵宇, 李广, 陈旸, "一种狗鼻纹特征点的检测方法、装置、系统及存储介质" (A method, apparatus, system, and storage medium for detecting dog nose-print feature points), Chinese patent, 2018.
[13] J. Redmon, S. Divvala, R. Girshick, and A. Farhadi, “You only look once: Unified, real-time object detection,” in Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, pp. 779-788, 2016.
[14] K. He, X. Zhang, S. Ren, and J. Sun, “Deep residual learning for image recognition,” in Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, pp. 770-778, 2016.
[15] K. Zhang, Z. Zhang, Z. Li, and Y. Qiao, "Joint face detection and alignment using multitask cascaded convolutional networks," IEEE Signal Processing Letters, vol. 23, no. 10, pp. 1499-1503, 2016.
[16] D. H. Jang, K. S. Kwon, J. K. Kim, K. Y. Yang, and J. B. Kim, "Dog Identification Method Based on Muzzle Pattern Image," Applied Sciences, vol. 10, no. 24, p. 8994, 2020.
[17] O. Ronneberger, P. Fischer, and T. Brox, "U-net: Convolutional networks for biomedical image segmentation," in Proceedings of Medical Image Computing and Computer-Assisted Intervention, pp. 234-241, Springer International Publishing, 2015.
[18] L. C. Chen, G. Papandreou, F. Schroff, and H. Adam, "Rethinking atrous convolution for semantic image segmentation," arXiv preprint arXiv:1706.05587, 2017.
[19] K. He, X. Zhang, S. Ren, and J. Sun, “Deep residual learning for image recognition,” in Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, pp. 770-778, 2016.
[20] P. Wang, P. Chen, Y. Yuan, D. Liu, Z. Huang, X. Hou, and G. Cottrell, “Understanding Convolution for Semantic Segmentation,” in 2018 IEEE Winter Conference on Applications of Computer Vision (WACV), pp. 1451-1460, 2018.
[21] N. Ahmed, "Recent review on image clustering," IET Image Processing, vol. 9, no. 11, pp. 1020-1032, 2015.
[22] J. Yu, et al., "Image clustering based on sparse patch alignment framework," Pattern Recognition, vol. 47, no. 11, pp. 3512-3519, 2014.
A. K. Jain, M. N. Murty, and P. J. Flynn, "Data clustering: a review," ACM Computing Surveys (CSUR), vol. 31, no. 3, pp. 264-323, 1999.
[23] R. David, "Grafcet: A powerful tool for specification of logic controllers," IEEE Transactions on Control Systems Technology, vol. 3, no. 3, pp. 253-268, 1995.
[24] J. MacQueen, “Some methods for classification and analysis of multivariate observations,” in Proceedings of the Fifth Berkeley Symposium on Mathematical Statistics and Probability, vol. 1, no. 14, pp. 281-297, 1967.
[25] J. Xie, R. Girshick, and A. Farhadi, "Unsupervised deep embedding for clustering analysis," in Proceedings of the International Conference on Machine Learning, PMLR, pp. 478-487, 2016.
[26] F. Li, H. Qiao, and B. Zhang, "Discriminatively boosted image clustering with fully convolutional auto-encoders," Pattern Recognition, vol. 83, pp. 161-173, 2018.
[27] A. Krizhevsky, I. Sutskever, and G. Hinton, "Imagenet classification with deep convolutional neural networks," Advances in Neural Information Processing Systems, vol. 25, pp. 1097-1105, 2012.
[28] Y. Bengio, P. Simard, and P. Frasconi, “Learning long-term dependencies with gradient descent is difficult,” IEEE Transactions on Neural Networks, vol. 5, no. 2, pp. 157-166, 1994.
[29] X. Glorot and Y. Bengio, "Understanding the difficulty of training deep feedforward neural networks," in Proceedings of the Thirteenth International Conference on Artificial Intelligence and Statistics, pp. 249-256, 2010.
[30] S. Hochreiter, "Untersuchungen zu dynamischen neuronalen Netzen," Diploma thesis, Technical University of Munich, 1991.
[31] Y. LeCun, L. Bottou, G. B. Orr, and K.-R. Müller, "Efficient BackProp," in Neural Networks: Tricks of the Trade, pp. 9-50, Springer, 1998.
[32] A. M. Saxe, J. L. McClelland, and S. Ganguli, "Exact solutions to the nonlinear dynamics of learning in deep linear neural networks," arXiv preprint arXiv:1312.6120, 2013.
[33] K. He, X. Zhang, S. Ren, and J. Sun, "Delving deep into rectifiers: Surpassing human-level performance on ImageNet classification," in Proceedings of the IEEE International Conference on Computer Vision, pp. 1026-1034, 2015.
[34] K. He and J. Sun, "Convolutional neural networks at constrained time cost," in Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, pp. 5353-5360, 2015.
[35] R. K. Srivastava, K. Greff, and J. Schmidhuber, "Training very deep networks," arXiv preprint arXiv:1507.06228, 2015.
[36] T. Ojala, M. Pietikainen, and T. Maenpaa, "Multiresolution gray-scale and rotation invariant texture classification with local binary patterns," IEEE Transactions on Pattern Analysis and Machine Intelligence, vol. 24, no. 7, pp. 971-987, 2002.
[37] S. Ma and L. Bai, "A face detection algorithm based on AdaBoost and new Haar-like feature," in Proceedings of the IEEE International Conference on Software Engineering and Service Science (ICSESS), pp. 651-654, 2016.
[38] C. Lu and X. Tang, "Surpassing human-level face verification performance on LFW with GaussianFace," in Proceedings of the Twenty-Ninth AAAI Conference on Artificial Intelligence, pp. 3811-3819, 2015.
[39] Y. Sun, X. Wang, and X. Tang, “Deep learning face representation by joint identification-verification,” in Proceedings of the 27th International Conference on Neural Information Processing Systems - Volume 2, pp. 1988–1996, 2014.
[40] Y. Sun, X. Wang, and X. Tang, “Deeply learned face representations are sparse, selective, and robust,” in Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, pp. 2892-2900, 2015.
[41] V. Jain and E. G. Learned-Miller, “FDDB: A benchmark for face detection in unconstrained settings,” Univ. Massachusetts, Amherst, MA, USA, Tech. Rep. UMCS-2010-009, 2010.
[42] S. Yang, P. Luo, C. C. Loy, and X. Tang, "WIDER FACE: A face detection benchmark," in Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, pp. 5525-5533, 2016.
[43] Y. Taigman, M. Yang, M. Ranzato, and L. Wolf, “Deepface: Closing the gap to human-level performance in face verification,” in Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, pp. 1701-1708, 2014.
[44] Y. Sun, X. Wang, and X. Tang, "Deep learning face representation from predicting 10,000 classes," in Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, pp. 1891-1898, 2014.
[45] A. Presley and D. H. Liles, "The use of IDEF0 for the design and specification of methodologies," in Proceedings of the 4th Industrial Engineering Research Conference, 1995.
[46] R. David, "Grafcet: A powerful tool for specification of logic controllers," IEEE Transactions on Control Systems Technology, vol. 3, no. 3, pp. 253-268, 1995.
Advisor: 陳慶瀚 (Ching-Han Chen)    Date of Approval: 2021-08-04