Thesis Record 107522113: Detailed Information




Author: 羅文圻 (Wen-Chi Lo)    Department: Computer Science and Information Engineering (資訊工程學系)
Thesis title: 牧場乳牛偵測與身分識別
(Detection and Identification for Dairy Cows in Pasture)
Related theses:
★ 整合GRAFCET虛擬機器的智慧型控制器開發平台
★ 分散式工業電子看板網路系統設計與實作
★ 設計與實作一個基於雙攝影機視覺系統的雙點觸控螢幕
★ 智慧型機器人的嵌入式計算平台
★ 一個即時移動物偵測與追蹤的嵌入式系統
★ 一個固態硬碟的多處理器架構與分散式控制演算法
★ 基於立體視覺手勢辨識的人機互動系統
★ 整合仿生智慧行為控制的機器人系統晶片設計
★ 嵌入式無線影像感測網路的設計與實作
★ 以雙核心處理器為基礎之車牌辨識系統
★ 基於立體視覺的連續三維手勢辨識
★ 微型、超低功耗無線感測網路控制器設計與硬體實作
★ 串流影像之即時人臉偵測、追蹤與辨識─嵌入式系統設計
★ 一個快速立體視覺系統的嵌入式硬體設計
★ 即時連續影像接合系統設計與實作
★ 基於雙核心平台的嵌入式步態辨識系統
Files: full text available for browsing in the system after 2025-7-20.
Abstract (Chinese): 乳牛是一種高經濟動物,而其生長發育或發情配種等皆與身分識別有直接關係,因此本研究提出了一個牧場乳牛身分識別系統,同時滿足用遠距攝影機拍攝的影像做辨識以及可隨時增刪類別這兩個需求。利用物件偵測找出影像中的乳牛,再用影像對比增強強化特徵,並萃取出 Auto Encoder 與水平、垂直投影三種模態特徵,並為每隻牛分配其專屬的分類器,新增或減少類別只需增刪分類器即可。每個分類器會分別對三種特徵做量化,再由機率神經網路做特徵融合,最後將所有分類器的輸出作整合並推論。實驗結果顯示我們的系統在少量訓練樣本數(10、5)時得到93.5%和88.6%的辨識率,優於ResNet50的86.5%和70.0%;在15張訓練樣本時則以94.0%略輸給ResNet50的96.0%。此外在10張訓練樣本時,每新增一個類別我們的系統平均降低0.9%的辨識率,而ResNet50則是1.23%。這顯示我們的系統在少量訓練樣本的優勢,以及在新增類別時不僅不須重新訓練,對辨識率的影響也較小,能應對需隨時增刪類別之應用。
Abstract (English): Dairy cows are animals of high economic value, and their growth and breeding management are directly tied to individual identification. We therefore propose an identification system for dairy cows in pasture that satisfies two requirements: it identifies cows from images taken by remote cameras, and it allows categories to be added or deleted at any time. Object detection first locates the cows in an image; an image contrast enhancement algorithm then strengthens their appearance, from which three modal features are extracted: an autoencoder feature plus the horizontal and vertical projections. Each cow is assigned its own classifier, so adding or removing a category only requires adding or deleting a classifier. Each classifier quantizes the three features separately, fuses them with a probabilistic neural network, and the outputs of all classifiers are finally integrated for inference. Experimental results show that with few training samples (10 and 5 per class) our system achieves 93.5% and 88.6% accuracy, better than ResNet50's 86.5% and 70.0%; with 15 training samples our 94.0% is slightly below ResNet50's 96.0%. In addition, with 10 training samples, each newly added category lowers our system's accuracy by 0.9% on average, versus 1.23% for ResNet50. This demonstrates our system's advantage with few training samples: adding a category requires no retraining of the whole network and has a smaller impact on accuracy, so the system can cope with applications in which categories must be added or deleted frequently.
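The modular, per-cow classifier scheme described in the abstract can be sketched as follows. This is a minimal illustration under stated assumptions, not the thesis implementation: it uses only the horizontal/vertical projection feature, a fixed Gaussian kernel width, and hypothetical names (`projection_features`, `CowPNN`); the actual system also extracts an autoencoder feature, applies contrast enhancement first, and tunes its probabilistic neural network further.

```python
import numpy as np

def projection_features(gray, eps=1e-12):
    """Horizontal/vertical projection feature of a grayscale cow patch:
    the row sums and column sums, concatenated and L2-normalized."""
    gray = np.asarray(gray, dtype=float)
    h = gray.sum(axis=1)              # horizontal projection: one value per row
    v = gray.sum(axis=0)              # vertical projection: one value per column
    feat = np.concatenate([h, v])
    return feat / (np.linalg.norm(feat) + eps)

class CowPNN:
    """One Gaussian-kernel PNN 'pattern layer' per enrolled cow.
    Enrolling a cow adds a classifier; removing one deletes it,
    with no retraining of the remaining classifiers."""
    def __init__(self, sigma=0.2):
        self.sigma = sigma
        self.prototypes = {}          # cow_id -> (n_samples, d) training features

    def enroll(self, cow_id, features):
        self.prototypes[cow_id] = np.asarray(features, dtype=float)

    def remove(self, cow_id):
        self.prototypes.pop(cow_id, None)

    def score(self, x):
        """Average Gaussian kernel response of each cow's pattern units."""
        x = np.asarray(x, dtype=float)
        scores = {}
        for cow_id, protos in self.prototypes.items():
            d2 = ((protos - x) ** 2).sum(axis=1)
            scores[cow_id] = np.exp(-d2 / (2.0 * self.sigma ** 2)).mean()
        return scores

    def identify(self, x):
        """Decision integration: the cow whose classifier responds strongest."""
        scores = self.score(x)
        return max(scores, key=scores.get)
```

Because each cow owns an independent classifier, enrolling a new animal is a single `enroll` call on its feature samples, which mirrors the incremental-learning property the abstract claims.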
Keywords (Chinese):
★ 乳牛辨識
★ 乳牛偵測
Keywords (English):
★ Cow Identification
★ Cow Detection
Table of Contents
Abstract (Chinese)
Abstract (English)
Acknowledgements
Table of Contents
List of Figures
List of Tables
Chapter 1: Introduction
  1.1 Research Background
  1.2 Research Objectives
  1.3 Thesis Organization
Chapter 2: Literature Review
  2.1 Cow Segmentation
  2.2 Image Contrast Enhancement
  2.3 Cow Feature Extraction
    2.3.1 Horizontal and Vertical Projection
    2.3.2 Autoencoder
  2.4 Identification
    2.4.1 Self-Organizing Map
    2.4.2 Probabilistic Neural Network
    2.4.3 Particle Swarm Optimization
Chapter 3: Design of the Dairy Cow Identification System
  3.1 System Architecture
    3.1.1 Adaptive Double-Threshold Module
    3.1.2 Feature Extraction Module
    3.1.3 Identification Module
    3.1.4 Distributed Classifier Module
    3.1.5 Decision Fusion Module
  3.2 Discrete-Event Modeling of the Identification System
    3.2.1 Discrete-Event Model of Image Contrast Enhancement
    3.2.2 Discrete-Event Model of the Double-Threshold Method
    3.2.3 Discrete-Event Model of Feature Extraction
    3.2.4 Discrete-Event Model of Horizontal/Vertical Projection Feature Extraction
    3.2.5 Discrete-Event Model of Identification
    3.2.6 Discrete-Event Model of the Distributed Classifier
    3.2.7 Discrete-Event Model of Feature Probability Computation
    3.2.8 Discrete-Event Model of Feature Fusion
    3.2.9 Discrete-Event Model of Decision Fusion
  3.3 High-Level Software Synthesis
    3.3.1 Software Synthesis from Discrete-Event Models
    3.3.2 Software Synthesis Rules
Chapter 4: Experiments
  4.1 Experimental Environment
  4.2 Segmentation Experiments
    4.2.1 YOLO Segmentation
    4.2.2 Mask R-CNN Segmentation
  4.3 Cow Identification Experiments
    4.3.1 Training Strategy
    4.3.2 Number of Training Samples
  4.4 Comparison Experiments
    4.4.1 Comparison with Classical Machine Learning Methods
    4.4.2 Comparison with the ResNet Deep Network
  4.5 New-Category Experiments
    4.5.1 Incremental Learning
    4.5.2 Comparison with ResNet When Adding Categories
Chapter 5: Conclusion and Future Work
  5.1 Conclusion
  5.2 Future Work
References
References
[1] 楊智凱 et al., "智慧農業機具與輔之應用," 科儀新知, no. 220, pp. 4-19, 2019.
[2] 徐濟泰, "乳牛場自動化與智慧的效益分析," 台灣農學會報, vol. 20, no. 1, pp. 39-58, 2019.
[3] R. R. Lowe, "Livestock identification tag," ed: Google Patents, 1976.
[4] D. Lay Jr, T. Friend, C. Bowers, K. Grissom, and O. Jenkins, "A comparative physiological and behavioral study of freeze and hot-iron branding using dairy cows," Journal of animal science, vol. 70, no. 4, pp. 1121-1125, 1992.
[5] S. Stankovski, G. Ostojic, I. Senk, M. Rakic-Skokovic, S. Trivunovic, and D. Kucevic, "Dairy cow monitoring by RFID," Scientia Agricola, vol. 69, no. 1, pp. 75-80, 2012.
[6] A. I. Awad, "From classical methods to animal biometrics: A review on cattle identification and tracking," Computers and Electronics in Agriculture, vol. 123, pp. 423-435, 2016.
[7] T. L. Robertson, O. Colliou, E. J. Snyder, and M. J. Zdeblick, "RFID antenna for in-body device," ed: Google Patents, 2012.
[8] T. Gaber, A. Tharwat, A. E. Hassanien, and V. Snasel, "Biometric cattle identification approach based on weber’s local descriptor and adaboost classifier," Computers and Electronics in Agriculture, vol. 122, pp. 55-66, 2016.
[9] Y. Lu, X. He, Y. Wen, and P. S. Wang, "A new cow identification system based on iris analysis and recognition," International journal of biometrics, vol. 6, no. 1, pp. 18-32, 2014.
[10] S. Kumar, S. Tiwari, and S. K. Singh, "Face recognition for cattle," in 2015 Third International Conference on Image Information Processing (ICIIP), 2015, pp. 65-72: IEEE.
[11] C.-l. Tisse, L. Martin, L. Torres, and M. Robert, "Person identification technique using human iris recognition," in Proc. Vision Interface, 2002, vol. 294, pp. 294-299.
[12] W. Andrew, C. Greatwood, and T. Burghardt, "Visual localisation and individual identification of holstein friesian cattle via deep learning," in Proceedings of the IEEE International Conference on Computer Vision Workshops, 2017, pp. 2850-2859.
[13] R. Girshick, J. Donahue, T. Darrell, and J. Malik, "Rich feature hierarchies for accurate object detection and semantic segmentation," in Proceedings of the IEEE conference on computer vision and pattern recognition, 2014, pp. 580-587.
[14] T. T. Zin, C. N. Phyo, P. Tin, H. Hama, and I. Kobayashi, "Image technology based cow identification system using deep learning," in Proceedings of the International MultiConference of Engineers and Computer Scientists, 2018, vol. 1, pp. 236-247.
[15] L. Bergamini et al., "Multi-views Embedding for Cattle Re-identification," in 2018 14th International Conference on Signal-Image Technology & Internet-Based Systems (SITIS), 2018, pp. 184-191: IEEE.
[16] D. F. Specht, "Probabilistic neural networks," Neural networks, vol. 3, no. 1, pp. 109-118, 1990.
[17] J. Kennedy and R. Eberhart, "Particle swarm optimization," in Proceedings of ICNN′95-International Conference on Neural Networks, 1995, vol. 4, pp. 1942-1948: IEEE.
[18] T. Kalinke, C. Tzomakas, and W. von Seelen, "A texture-based object detection and an adaptive model-based classification," in Procs. IEEE Intelligent Vehicles Symposium, 1998, vol. 98, pp. 341-346: Citeseer.
[19] R. Girshick, "Fast r-cnn," in Proceedings of the IEEE international conference on computer vision, 2015, pp. 1440-1448.
[20] S. Ren, K. He, R. Girshick, and J. Sun, "Faster r-cnn: Towards real-time object detection with region proposal networks," in Advances in neural information processing systems, 2015, pp. 91-99.
[21] J. Redmon and A. Farhadi, "YOLO9000: better, faster, stronger," in Proceedings of the IEEE conference on computer vision and pattern recognition, 2017, pp. 7263-7271.
[22] J. Redmon, S. Divvala, R. Girshick, and A. Farhadi, "You only look once: Unified, real-time object detection," in Proceedings of the IEEE conference on computer vision and pattern recognition, 2016, pp. 779-788.
[23] J. Redmon and A. Farhadi, "Yolov3: An incremental improvement," arXiv preprint arXiv:1804.02767, 2018.
[24] W. Liu et al., "Ssd: Single shot multibox detector," in European conference on computer vision, 2016, pp. 21-37: Springer.
[25] K. He, X. Zhang, S. Ren, and J. Sun, "Deep residual learning for image recognition," in Proceedings of the IEEE conference on computer vision and pattern recognition, 2016, pp. 770-778.
[26] T.-Y. Lin, P. Dollár, R. Girshick, K. He, B. Hariharan, and S. Belongie, "Feature pyramid networks for object detection," in Proceedings of the IEEE conference on computer vision and pattern recognition, 2017, pp. 2117-2125.
[27] K. Liang, Y. Ma, Y. Xie, B. Zhou, and R. Wang, "A new adaptive contrast enhancement algorithm for infrared images based on double plateaus histogram equalization," Infrared Physics and Technology, vol. 55, no. 4, pp. 309-315, 2012.
[28] Y.-f. Song, X.-p. Shao, and J. Xu, "New enhancement algorithm for infrared image based on double plateaus histogram," Infrared and Laser Engineering, vol. 2, 2008.
[29] C. Szegedy et al., "Going deeper with convolutions," in Proceedings of the IEEE conference on computer vision and pattern recognition, 2015, pp. 1-9.
[30] T. Kohonen, "The self-organizing map," Proceedings of the IEEE, vol. 78, no. 9, pp. 1464-1480, 1990.
[31] C.-H. Chen, M.-Y. Lin, and X.-C. Guo, "High-level modeling and synthesis of smart sensor networks for Industrial Internet of Things," Computers and Electrical Engineering, vol. 61, pp. 48-66, 2017.
[32] T.-Y. Lin et al., "Microsoft coco: Common objects in context," in European conference on computer vision, 2014, pp. 740-755: Springer.
[33] K. He, G. Gkioxari, P. Dollár, and R. Girshick, "Mask r-cnn," in Proceedings of the IEEE international conference on computer vision, 2017, pp. 2961-2969.
Advisor: 陳慶瀚 (Ching-Han Chen)    Approval date: 2020-7-28
