Master's/Doctoral Thesis 108522023 — Detailed Record




Name: Tai-Feng Lin (林岱鋒)    Department: Computer Science and Information Engineering
Thesis Title: Hybrid Deep Neural Network Classifier for Market Fish Species Recognition
(混合深度神經網路的市場魚種辨識)
Related Theses
★ An intelligent controller development platform integrating the GRAFCET virtual machine
★ Design and implementation of a distributed industrial electronic Kanban network system
★ Design and implementation of a dual-point touch screen based on a dual-camera vision system
★ An embedded computing platform for intelligent robots
★ An embedded system for real-time moving object detection and tracking
★ A multiprocessor architecture and distributed control algorithm for solid-state drives
★ A human–machine interaction system based on stereo-vision gesture recognition
★ Robot system-on-chip design integrating bio-inspired intelligent behavior control
★ Design and implementation of an embedded wireless image sensor network
★ A license plate recognition system based on a dual-core processor
★ Continuous 3D gesture recognition based on stereo vision
★ Design and hardware implementation of a miniature, ultra-low-power wireless sensor network controller
★ Real-time face detection, tracking, and recognition in streaming video: an embedded system design
★ Embedded hardware design of a fast stereo vision system
★ Design and implementation of a real-time continuous image stitching system
★ An embedded gait recognition system based on a dual-core platform
Files: Full text available in the thesis system after 2026-08-03.
Abstract (Chinese): Taiwan's traditional fish markets offer thousands of fish species, and ordinary consumers can hardly identify them accurately; yet most existing fish identification systems target non-edible species and cover very few classes. This study therefore designs a multi-class fish identification system for species commonly found in Taiwan's fish markets, so that consumers can identify and purchase fish products at the market. The system first locates the fish body in the image through object detection, then applies a hybrid neural network classifier: ResNet50 performs an initial classification and outputs the Top-5 most similar classes; a second, template-matching stage then uses a Siamese neural network to compare the segmented image with the template images of all candidate classes and outputs a similarity value for each pair; finally, the class with the largest similarity value is taken as the final decision. Experimental results show that the hybrid neural network classifier achieves a 97.5% recognition rate, outperforming ResNet50 (90.5%) and the Siamese neural network (89%).
Abstract (English): Taiwan's traditional fish markets carry thousands of fish species, and it is difficult for ordinary consumers to identify them accurately. Most existing fish identification systems, however, target non-edible species and cover only a small number of classes. This study designs a multi-class fish identification system for species commonly sold in Taiwan's fish markets, helping consumers identify and purchase fish products at the market. The system first locates the fish in the image through object detection, then applies a hybrid neural network classifier: ResNet50 performs an initial classification and outputs the Top-5 most similar classes, after which a second, template-matching stage uses a Siamese neural network to compare the segmented image against the template images of each candidate class and output a similarity value. The class with the largest similarity value is taken as the final decision. Experimental results show that the hybrid neural network classifier achieves a recognition rate of 97.5%, outperforming ResNet50 (90.5%) and the Siamese neural network alone (89%).
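The two-stage decision described in the abstract can be sketched as follows. This is a minimal pure-Python illustration, not the thesis implementation: the class names, probabilities, and template vectors are hypothetical stand-ins for the actual ResNet50 softmax output and Siamese-network similarity scores.

```python
import math

def cosine(a, b):
    """Cosine similarity between two vectors."""
    dot = sum(x * y for x, y in zip(a, b))
    na = math.sqrt(sum(x * x for x in a))
    nb = math.sqrt(sum(y * y for y in b))
    return dot / (na * nb)

def hybrid_classify(class_probs, query_embedding, class_templates, k=5):
    """Two-stage hybrid decision.

    Stage 1 shortlists the k most probable classes (the CNN's Top-k step);
    stage 2 re-ranks the shortlist by similarity between the query embedding
    and each candidate's template (standing in for the Siamese-network score)
    and returns the best match as the final decision.
    """
    shortlist = sorted(class_probs, key=class_probs.get, reverse=True)[:k]
    return max(shortlist,
               key=lambda c: cosine(query_embedding, class_templates[c]))
```

The point of the second stage is that it can override the first: even if the CNN ranks one class highest, a candidate whose template is closer to the query embedding wins the final decision.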
Keywords (Chinese): ★ Deep learning
★ Object detection
★ Image recognition
Keywords (English)
Table of Contents
Abstract (Chinese)
Abstract (English)
Acknowledgments
Table of Contents
List of Figures
List of Tables
Chapter 1: Introduction
1.1 Research Background
1.2 Research Objectives
1.3 Thesis Organization
Chapter 2: Literature Review
2.1 Object Detection
2.2 Image Classification
2.2.1 LeNet
2.2.2 AlexNet
2.2.3 VGGNet
2.2.4 GoogLeNet
2.2.5 Deep Residual Networks
2.3 Siamese Neural Networks
Chapter 3: Hybrid Neural Network Classifier for Fish Species Recognition
3.1 Hybrid Neural Network Classification Method
3.2 Classifier System Architecture
3.3 Discrete-Event Modeling of the Fish Recognition System
3.3.1 Discrete-Event Modeling of Object Segmentation
3.3.2 Discrete-Event Modeling of Species Recognition
3.3.3 Discrete-Event Modeling of the Siamese Neural Network
3.4 High-Level Software Synthesis of the Hybrid Neural Network Classifier
Chapter 4: Experimental Results
4.1 Development Environment
4.2 Experimental Datasets
4.2.1 Detection Dataset
4.2.2 Classification Dataset
4.3 Segmentation Experiments
4.4 Deep Residual Network Experiments
4.5 Siamese Neural Network Experiments
4.6 Hybrid Neural Network Classifier
4.7 System Integration
Chapter 5: Conclusions and Future Work
5.1 Conclusions
5.2 Future Work
References
Advisor: Ching-Han Chen (陳慶瀚)    Date of Approval: 2021-08-04

For questions about this thesis, please contact the Promotion Services Division, National Central University Library, TEL: (03)422-7151 ext. 57407, or by e-mail.