Master's/Doctoral Thesis 109521063: Detailed Record




Name: 蔡沂倢 (Yi-Chieh Cai)   Department: Electrical Engineering
Thesis Title: An AI-Based Electronic Scale for Bread, Vegetable, and Fruit Recognition and Pricing
Related Theses
★ Control of a hybrid power-supply system with a direct methanol fuel cell
★ Water-quality monitoring for hydroponic plants using refractive-index measurement
★ DSP-based automatic guidance and control system for a model car
★ Redesigned motion control of a rotary inverted pendulum
★ Fuzzy control decisions for freeway on-ramp and off-ramp signals
★ On the fuzziness of fuzzy sets
★ Further improvement of the motion-control performance of a dual-mass spring-linked system
★ Machine-vision system for air hockey
★ Robotic offensive and defensive control for air hockey
★ Attitude control of a model helicopter
★ Stability analysis and design of fuzzy control systems
★ Real-time recognition system for access-control monitoring
★ Air hockey: human versus robotic arm
★ Mahjong tile recognition system
★ Application of correlational-error neural networks to radiometric measurement of vegetation and soil moisture
★ Standing control of a three-link robot
  1. The author has agreed to make this electronic thesis openly available immediately.
  2. The open-access full text is licensed only for personal, non-profit retrieval, reading, and printing for academic research purposes.
  3. Please comply with the Copyright Act of the Republic of China; do not reproduce, distribute, adapt, repost, or broadcast it without authorization.

Abstract (Chinese): Some supermarket goods, such as vegetables, fruit, and bread, carry no checkout barcode and must first be weighed to obtain one before checkout. When the store is busy, clerks cannot assist with weighing in time and customers waste considerable time. This thesis therefore builds an AI-based pricing system for bread, vegetables, and fruit. The customer simply places the purchased items on an electronic scale; the system automatically recognizes whether the items on the scale are vegetables/fruit or bread, prices them by unit price and weight or by count, and guides the customer through the checkout website until every item has been weighed, after which a shopping list is printed. This greatly reduces the store's staffing needs and the time customers spend waiting for manual weighing.
Reference [1] recognizes 40 kinds of vegetables and fruit, which may be placed in colored plastic bags. This thesis extends that work by adding recognition and pricing of bread and green leafy vegetables. With more kinds to recognize, the items are divided into three groups: vegetables and fruit, leafy vegetables, and bread. Because the target leafy vegetables are green, the HSV color space combined with a recognition network first separates out the leafy vegetables; the YOLOv5m6 architecture then detects and coarsely distinguishes vegetables/fruit from bread. Vegetables and fruit are handled by the method of [1]; leafy vegetables and bread are classified by EfficientNet-B6 networks into 10 kinds of leafy vegetables and 25 kinds of bread, respectively. Because breads on a tray may be packed closely or stacked when many are purchased, this thesis designs a masking mechanism that reduces the influence of stacking and raises bread-recognition accuracy. In total the system recognizes 40 kinds of vegetables and fruit, 10 kinds of leafy vegetables, and 25 kinds of bread. On test data collected under white light, recognition accuracy reaches 96.9% for vegetables and fruit [1], 98.6% for leafy vegetables, and 99.5% for bread. This thesis also improves the checkout website of [1], adding a bread-weighing interface whose on-screen instructions guide the user through the weighing process.

[1] 董吉峰, “基於AI技術之蔬果辨識計價電子秤” (AI-based electronic scale for vegetable and fruit recognition and pricing), Master's thesis, National Central University, June 2021.
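The abstract describes using the HSV color space, ahead of the detection stage, to separate green leafy vegetables from the other items. The record gives no thresholds or implementation, so the following is only a minimal sketch of that idea; the green-hue window, the pixel-ratio threshold, and the function name are illustrative assumptions, not the thesis's actual values.

```python
import colorsys

# Illustrative values only; the thesis's real HSV thresholds are not given here.
GREEN_HUE_RANGE = (0.17, 0.45)  # roughly 60-160 degrees on a 0-1 hue scale
MIN_GREEN_RATIO = 0.5           # assumed fraction of green pixels required

def is_leafy_candidate(rgb_pixels):
    """Return True if enough pixels of a crop fall in the green hue window.

    rgb_pixels: iterable of (r, g, b) tuples with components in 0..255.
    """
    green = total = 0
    for r, g, b in rgb_pixels:
        h, s, v = colorsys.rgb_to_hsv(r / 255.0, g / 255.0, b / 255.0)
        total += 1
        # Skip very dark or washed-out pixels, whose hue is unreliable.
        if s > 0.2 and v > 0.2 and GREEN_HUE_RANGE[0] <= h <= GREEN_HUE_RANGE[1]:
            green += 1
    return total > 0 and green / total >= MIN_GREEN_RATIO

# A mostly green crop would be routed to the leafy-vegetable classifier;
# a brown, bread-like crop would continue to YOLOv5m6 detection.
print(is_leafy_candidate([(30, 160, 40)] * 8 + [(150, 100, 60)] * 2))  # True
print(is_leafy_candidate([(150, 100, 60)] * 10))                       # False
```

In a real pipeline the same test would run on an OpenCV HSV image rather than pixel tuples, but the routing decision is the same: green-dominant crops go to the leafy-vegetable branch.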
Abstract (English): Some supermarket goods, such as vegetables, fruit, and bread, carry no barcode and must be weighed to obtain one before checkout. When the store is crowded, clerks cannot assist with weighing in time, and customers waste a lot of time waiting. This thesis therefore builds an AI-based pricing system that recognizes vegetables, fruit, and bread. When the customer places the purchased items on the electronic scale, the system automatically recognizes whether the items are vegetables, fruit, or bread, and the total price is computed from the weight or the quantity. The customer follows the checkout website until all purchases have been weighed, and a shopping list is then printed. This greatly reduces the store's staffing needs and saves the time customers spend waiting for one-to-one staff assistance.
Reference [1] recognized 40 kinds of fruits and vegetables placed in colored plastic bags. This thesis builds on that work by adding bread and green leafy vegetables. With more kinds of items to recognize, this thesis divides them into three categories: fruits and vegetables, leafy vegetables, and bread. The YOLOv5m6 network architecture detects and preliminarily distinguishes vegetables, fruit, and bread while counting the items. EfficientNet-B6 networks then classify 10 kinds of leafy vegetables and 25 kinds of bread, respectively, and weighted majority voting is used for fruit and vegetable recognition [1]. When many breads are purchased, the breads on the tray may be placed closely together or stacked, so this thesis designs a masking mechanism that reduces the influence of stacking and improves bread-recognition accuracy. The system recognizes a total of 40 kinds of vegetables and fruit, 10 kinds of leafy vegetables, and 25 kinds of bread. On test data taken under white light, the recognition accuracy is 96.9% for vegetables and fruit [1], 98.6% for leafy vegetables, and 99.5% for bread. This thesis also improves the checkout website of [1], adding a bread-weighing interface with description text that guides users through the weighing process.
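The record states only that a masking mechanism reduces the influence of closely placed or stacked breads on classification; no implementation details are given. One plausible reading, sketched here purely as an assumption, is to blank out every other detected bounding box before handing the target crop to the classifier, so neighboring breads do not leak into it (the function name and box format are hypothetical):

```python
def mask_other_boxes(image, boxes, target_idx, fill=0):
    """Blank all detected boxes except boxes[target_idx].

    image: list of rows of pixel values; boxes: (x1, y1, x2, y2) with
    exclusive right/bottom edges. Returns a masked copy in which only the
    target item and the untouched background remain visible.
    """
    out = [row[:] for row in image]
    for i, (x1, y1, x2, y2) in enumerate(boxes):
        if i == target_idx:
            continue
        for y in range(y1, y2):
            for x in range(x1, x2):
                out[y][x] = fill
    # Restore the target box last, so overlap with a neighbour never erases it.
    tx1, ty1, tx2, ty2 = boxes[target_idx]
    for y in range(ty1, ty2):
        for x in range(tx1, tx2):
            out[y][x] = image[y][x]
    return out

image = [[1] * 4 for _ in range(4)]
boxes = [(0, 0, 2, 2), (1, 1, 4, 4)]   # two overlapping detections
masked = mask_other_boxes(image, boxes, 0)
print(masked[3][3])  # 0: the neighbouring box is blanked
print(masked[1][1])  # 1: the overlapping part of the target box is kept
```

Restoring the target region after blanking is what makes stacked items tolerable: each crop is classified as if it were alone on the tray, one detection at a time.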
Keywords (Chinese) ★ deep learning
★ object detection
★ classification
★ vegetable and fruit recognition
★ bread recognition
Keywords (English) ★ deep learning
★ object detection
★ image classification
★ vegetable recognition
★ fruit recognition
★ bread recognition
Table of Contents
Abstract (Chinese) i
Abstract (English) ii
Acknowledgments iii
Table of Contents iv
List of Figures vi
List of Tables viii
Chapter 1 Introduction 1
1.1 Research Background and Motivation 1
1.2 Literature Review 1
1.3 Thesis Objectives 5
1.4 Thesis Organization 6
Chapter 2 System Architecture and Hardware 7
2.1 System Architecture and Flowchart 7
2.2 Hardware 8
2.2.1 Desktop Computer 8
2.2.2 Graphics Card 8
2.2.3 Single-Board Computer 9
2.2.4 Bluetooth Electronic Scale 10
2.2.5 Webcam 10
2.2.6 Thermal Printer 10
2.3 Software Environment 11
Chapter 3 Main Network Architectures and Algorithms 12
3.1 Overall Algorithm Flow 12
3.2 Image Preprocessing 13
3.3 Color-Space Conversion 15
3.4 Leafy-Vegetable Group Recognition 17
3.4.1 Training Data 17
3.4.2 Data Augmentation 18
3.5 Leafy-Vegetable Classification 18
3.5.1 Training Data 19
3.6 Bread and Vegetable/Fruit Group Detection 19
3.6.1 Training Data 20
3.6.2 Data Augmentation 20
3.7 Vegetable and Fruit Classification 21
3.8 Masking Mechanism 22
3.9 Bread Classification 24
3.9.1 Training Data 24
Chapter 4 Checkout Website User Guide 26
Chapter 5 Experimental Results 30
5.1 Leafy-Vegetable Recognition Results 30
5.2 Leafy-Vegetable Classification Network Results 30
5.3 Bread and Vegetable/Fruit Detection Network Results 32
5.4 Vegetable and Fruit Classification 33
5.5 Masking Mechanism 34
5.6 Bread Classification Network Results 34
5.7 Integrated Experimental Results 36
Chapter 6 Conclusions and Future Work 40
6.1 Conclusions 40
6.2 Future Work 40
References 42
References
[1] 董吉峰, “基於AI技術之蔬果辨識計價電子秤” (AI-based electronic scale for vegetable and fruit recognition and pricing), Master's thesis, National Central University, June 2021.
[2] F. Femling, A. Olsson and F. Alonso-Fernandez, “Fruit and vegetable identification using machine learning for retail applications,” in 14th International Conference on Signal-Image Technology & Internet-Based Systems, 2018, pp. 9-15.
[3] “BakeryScan: the pastry AI cooperating with cashier,” BakeryScan. [Online]. Available: https://bakeryscan.com/bakeryscan-eng/. [Accessed: 28-Jul-2022].
[4] M. Morimoto and A. Higasa, “A bread recognition system using RGB-D sensor,” in International Conference on Informatics, Electronics & Vision, 2015, pp. 1-4.
[5] D. Pishva, K. Hirakawa, A. Kawai, and T. Shiino, “A unified image segmentation approach with application to bread recognition,” in 5th International Conference on Signal Processing Proceedings, Volume 2, 2000, pp. 840-844.
[6] G. Z. Jian and C. M. Wang, “The bread recognition system with logistic regression,” Communications in Computer and Information Science, Volume 1013, 2019, pp. 150-156.
[7] T. Oka and M. Morimoto, “A recognition method for partially overlapped objects,” World Automation Congress, 2016, pp. 1-4.
[8] M. Yukitoh, T. Oka, and M. Morimoto, “Recognition of overlapped objects using RGB-D sensor,” in 6th International Conference on Informatics, Electronics and Vision & in 7th International Symposium in Computational Medical and Health Technology, 2017, pp. 1-4.
[9] M. Morimoto and M. Yukitou, “A recognition method for overlapped objects using multiple RGB-D sensors,” World Automation Congress, 2018, pp. 1-5.
[10] W. d. S. Cotrim, V. P. R. Minim, L. B. Felix, and L. A. Minim, “Short convolutional neural networks applied to the recognition of the browning stages of bread crust,” Journal of Food Engineering, Volume 277, 2020, Art. no. 109916.
[11] S. H. Lee, C. S. Chan, S. J. Mayo, and P. Remagnino, “How deep learning extracts and learns leaf features for plant classification,” Pattern Recognition, Volume 71, 2017, pp. 1-13.
[12] T. T. Dat et al., “Leaf recognition based on joint learning multiloss of multimodel convolutional neural networks: A testing for Vietnamese herb,” Computational Intelligence and Neuroscience, 2021, Art. no. 5032359.
[13] A. Beikmohammadi, K. Faez, and A. Motallebi, “SWP-LeafNET: A novel multistage approach for plant leaf identification based on deep CNN,” Expert Systems with Applications, Volume 202, 2022, Art. no. 117470.
[14] S. A. Pearline and V. S. Kumarb, “Performance analysis of real-time plant species recognition using bilateral network combined with machine learning classifier,” Ecological Informatics, Volume 67, 2022, Art. no. 101492.
[15] J. Redmon, S. Divvala, R. Girshick, and A. Farhadi, “You only look once: Unified, real-time object detection,” Computer Vision and Pattern Recognition, 2016, pp. 779-788.
[16] J. Redmon and A. Farhadi, “YOLO9000: better, faster, stronger,” in Computer Vision and Pattern Recognition, 2017, pp. 6517-6525.
[17] J. Redmon and A. Farhadi, “YOLOv3: An incremental improvement,” arXiv preprint arXiv:1804.02767, 2018.
[18] Tsung-Yi Lin et al., “Feature pyramid networks for object detection,” arXiv preprint arXiv:1612.03144v2, 2016.
[19] M. Tan, R. Pang, and Q. V. Le, “EfficientDet: Scalable and efficient object detection,” in Computer Vision and Pattern Recognition, 2020, pp. 10778-10787.
[20] A. Bochkovskiy, C.-Y. Wang and H.-Y. M. Liao, “YOLOv4: Optimal speed and accuracy of object detection,” arXiv preprint arXiv:2004.10934, 2020.
[21] C. Y. Wang et al., “CSPNet: A new backbone that can enhance learning capability of CNN,” Computer Vision and Pattern Recognition Workshop, 2020.
[22] G. Jocher et al., “ultralytics/yolov5,” GitHub repository, 2022. [Online]. Available: https://github.com/ultralytics/yolov5.
[23] C. Szegedy et al., “Going deeper with convolutions,” arXiv preprint arXiv:1409.4842, 2014.
[24] K. He, X. Zhang, S. Ren, and J. Sun, “Deep residual learning for image recognition,” arXiv preprint arXiv:1512.03385, 2015.
[25] G. Huang, Z. Liu, L. V. D. Maaten, and K. Q. Weinberger, “Densely connected convolutional networks,” Computer Vision and Pattern Recognition, 2017, pp. 2261-2269.
[26] A. G. Howard et al., “MobileNets: Efficient convolutional neural networks for mobile vision applications,” arXiv preprint arXiv:1704.04861v1, 2017.
[27] M. Tan and Q. V. Le, “EfficientNet: Rethinking model scaling for convolutional neural networks,” arXiv preprint arXiv:1905.11946v5, 2019.
[28] Y. Zhou, M. Wu, Y. Bai, and C. Guo, “Flame detection with pruned and knowledge distilled YOLOv5,” in 5th Asian Conference on Artificial Intelligence Technology, 2021, pp. 780-785.
[29] S. Liu, L. Qi, H. Qin, J. Shi and J. Jia, “Path aggregation network for instance segmentation,” arXiv preprint arXiv: 1803.01534, 2018.
[30] “Histograms - 2: Histogram equalization,” OpenCV. [Online]. Available: https://docs.opencv.org/3.1.0/d5/daf/tutorial_py_histogram_equalization.html. [Accessed: 28-Jul-2022].
[31] “Histograms - 2: Histogram equalization,” OpenCV. [Online]. Available: https://docs.opencv.org/4.x/d5/daf/tutorial_py_histogram_equalization.html. [Accessed: 15-Aug-2022].
Advisor: 王文俊 (Wen-June Wang)   Date of Approval: 2022-09-13
