Abstract (English)
The biggest challenge currently facing dairy farming in Taiwan is the shortage of labor, particularly for herd management. The ongoing push toward smart farm management aims to achieve precise management through detection technology. Existing biometric identification methods mostly use the iris, muzzle print, or ear tag as the identification target; however, all of these targets require very high image quality, which is difficult to obtain in practice and costly to maintain. This study therefore uses the cattle's natural spot markings and facial features as the identification targets.
In this study, 25 cows were used as experimental samples, with a total of 121 images used for training and testing. First, YOLOv4 detects and crops the face and spot regions to produce feature images. A Triplet network then maps each feature image to a feature vector, the Euclidean distances between samples are computed to compare similarity, and the cattle identity is determined from the closest match. The experimental results show that, using a single feature as the recognition condition, the recognition rate is 92% for the cow face and 88% for the spots.
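A minimal sketch of this single-modality recognition stage is shown below: an embedding network trained with a triplet loss maps face (or spot) crops to feature vectors, and an unknown cow is assigned the identity of the enrolled sample with the smallest Euclidean distance. The backbone, image size, embedding dimension, and function names are illustrative assumptions, not the settings used in the thesis.

```python
# Sketch of the Triplet-embedding + Euclidean-distance matching step.
# EmbeddingNet, EMB_DIM, and identify() are assumed names for illustration.
import torch
import torch.nn as nn

EMB_DIM = 128  # assumed embedding size


class EmbeddingNet(nn.Module):
    """Small CNN that maps a cropped cow-face/spot image to an embedding."""

    def __init__(self, emb_dim: int = EMB_DIM):
        super().__init__()
        self.features = nn.Sequential(
            nn.Conv2d(3, 32, 3, stride=2, padding=1), nn.ReLU(),
            nn.Conv2d(32, 64, 3, stride=2, padding=1), nn.ReLU(),
            nn.AdaptiveAvgPool2d(1),
        )
        self.fc = nn.Linear(64, emb_dim)

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        z = self.features(x).flatten(1)
        z = self.fc(z)
        return nn.functional.normalize(z, p=2, dim=1)  # unit-length embedding


# Triplet training step: pull anchor/positive together, push negative apart.
net = EmbeddingNet()
triplet_loss = nn.TripletMarginLoss(margin=1.0, p=2)
optimizer = torch.optim.Adam(net.parameters(), lr=1e-4)

anchor = torch.randn(8, 3, 128, 128)    # crops of the same cow as `positive`
positive = torch.randn(8, 3, 128, 128)
negative = torch.randn(8, 3, 128, 128)  # crops of different cows

loss = triplet_loss(net(anchor), net(positive), net(negative))
optimizer.zero_grad()
loss.backward()
optimizer.step()


# Identification: nearest gallery embedding by Euclidean distance.
@torch.no_grad()
def identify(query: torch.Tensor, gallery: torch.Tensor, labels: list) -> str:
    """Return the label of the enrolled sample closest to the query crop."""
    q = net(query.unsqueeze(0))            # (1, EMB_DIM)
    g = net(gallery)                       # (N, EMB_DIM)
    dists = torch.cdist(q, g).squeeze(0)   # Euclidean distance to each cow
    return labels[int(dists.argmin())]
```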
Finally, a dual-modal hybrid neural network is designed: the two modalities are combined and a probabilistic neural network (PNN) computes the probability of each cattle sample for identification, raising the recognition rate to 96%. This demonstrates that the dual-modal hybrid neural network can effectively improve biometric identification performance in a few-sample setting.
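The sketch below illustrates one way the dual-modal fusion step could work: face and spot feature vectors for the same cow are concatenated and scored by a PNN, i.e. a Gaussian Parzen-window classifier that outputs a per-cow probability. The feature dimensions, the smoothing parameter sigma, and the concatenation scheme are assumptions for illustration; the thesis may combine the two modalities differently.

```python
# Minimal PNN (Gaussian Parzen-window) fusion sketch; pnn_probabilities
# and the concatenation of face/spot embeddings are assumed for illustration.
import numpy as np


def pnn_probabilities(x, train_feats, train_labels, sigma=0.5):
    """Return (cow_ids, P(cow_id | x)) estimated with Gaussian Parzen windows.

    x            : (D,)   fused feature of the query cow (face ++ spot)
    train_feats  : (N, D) fused features of enrolled cows
    train_labels : (N,)   integer cow IDs for each enrolled sample
    """
    # Pattern layer: Gaussian kernel response of every enrolled sample.
    sq_dist = np.sum((train_feats - x) ** 2, axis=1)
    kernel = np.exp(-sq_dist / (2.0 * sigma ** 2))

    # Summation layer: average the responses per cow ID.
    classes = np.unique(train_labels)
    scores = np.array([kernel[train_labels == c].mean() for c in classes])

    # Output layer: normalize to probabilities.
    probs = scores / scores.sum()
    return classes, probs


# Example: fuse the two modalities by concatenating their embeddings.
rng = np.random.default_rng(0)
face_emb = rng.normal(size=(25, 128))
spot_emb = rng.normal(size=(25, 128))
gallery = np.concatenate([face_emb, spot_emb], axis=1)   # (25, 256)
labels = np.arange(25)                                   # one sample per cow here

query = gallery[3] + 0.05 * rng.normal(size=256)         # noisy view of cow 3
ids, probs = pnn_probabilities(query, gallery, labels)
print("predicted cow:", ids[int(np.argmax(probs))])
```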