References
[1] 董吉峰, “Fruit and vegetable recognition and pricing electronic scale based on AI technology,” Master's thesis, National Central University, Taiwan, Jun. 2021.
[2] F. Femling, A. Olsson and F. Alonso-Fernandez, “Fruit and vegetable identification using machine learning for retail applications,” in 14th International Conference on Signal-Image Technology & Internet-Based Systems, 2018, pp. 9-15.
[3] “BakeryScan - the pastry AI cooperating with cashier,” BakeryScan. [Online]. Available: https://bakeryscan.com/bakeryscan-eng/. [Accessed: 28-Jul-2022].
[4] M. Morimoto and A. Higasa, “A bread recognition system using RGB-D sensor,” in International Conference on Informatics, Electronics & Vision, 2015, pp. 1-4.
[5] D. Pishva, K. Hirakawa, A. Kawai, and T. Shiino, “A unified image segmentation approach with application to bread recognition,” in 5th International Conference on Signal Processing Proceedings, vol. 2, 2000, pp. 840-844.
[6] G. Z. Jian and C. M. Wang, “The bread recognition system with logistic regression,” Communications in Computer and Information Science, vol. 1013, 2019, pp. 150-156.
[7] T. Oka and M. Morimoto, “A recognition method for partially overlapped objects,” in World Automation Congress, 2016, pp. 1-4.
[8] M. Yukitoh, T. Oka, and M. Morimoto, “Recognition of overlapped objects using RGB-D sensor,” in 6th International Conference on Informatics, Electronics and Vision & 7th International Symposium in Computational Medical and Health Technology, 2017, pp. 1-4.
[9] M. Morimoto and M. Yukitou, “A recognition method for overlapped objects using multiple RGB-D sensors,” in World Automation Congress, 2018, pp. 1-5.
[10] W. d. S. Cotrim, V. P. R. Minim, L. B. Felix, and L. A. Minim, “Short convolutional neural networks applied to the recognition of the browning stages of bread crust,” Journal of Food Engineering, vol. 277, 2020, Art. no. 109916.
[11] S. H. Lee, C. S. Chan, S. J. Mayo, and P. Remagnino, “How deep learning extracts and learns leaf features for plant classification,” Pattern Recognition, vol. 71, 2017, pp. 1-13.
[12] T. T. Dat et al., “Leaf recognition based on joint learning multiloss of multimodel convolutional neural networks: A testing for Vietnamese herb,” Computational Intelligence and Neuroscience, 2021, Art. no. 5032359.
[13] A. Beikmohammadi, K. Faez, and A. Motallebi, “SWP-LeafNET: A novel multistage approach for plant leaf identification based on deep CNN,” Expert Systems with Applications, vol. 202, 2022, Art. no. 117470.
[14] S. A. Pearline and V. S. Kumar, “Performance analysis of real-time plant species recognition using bilateral network combined with machine learning classifier,” Ecological Informatics, vol. 67, 2022, Art. no. 101492.
[15] J. Redmon, S. Divvala, R. Girshick, and A. Farhadi, “You only look once: Unified, real-time object detection,” in Computer Vision and Pattern Recognition, 2016, pp. 779-788.
[16] J. Redmon and A. Farhadi, “YOLO9000: better, faster, stronger,” in Computer Vision and Pattern Recognition, 2017, pp. 6517-6525.
[17] J. Redmon and A. Farhadi, “YOLOv3: An incremental improvement,” arXiv preprint arXiv:1804.02767, 2018.
[18] T.-Y. Lin et al., “Feature pyramid networks for object detection,” arXiv preprint arXiv:1612.03144v2, 2016.
[19] M. Tan, R. Pang, and Q. V. Le, “EfficientDet: Scalable and efficient object detection,” in Computer Vision and Pattern Recognition, 2020, pp. 10778-10787.
[20] A. Bochkovskiy, C.-Y. Wang and H.-Y. M. Liao, “YOLOv4: Optimal speed and accuracy of object detection,” arXiv preprint arXiv:2004.10934, 2020.
[21] C. Y. Wang et al., “CSPNet: A new backbone that can enhance learning capability of CNN,” in Computer Vision and Pattern Recognition Workshops, 2020.
[22] G. Jocher et al., “ultralytics/yolov5,” GitHub repository, 2022. [Online]. Available: https://github.com/ultralytics/yolov5.
[23] C. Szegedy et al., “Going deeper with convolutions,” arXiv preprint arXiv:1409.4842, 2014.
[24] K. He, X. Zhang, S. Ren, and J. Sun, “Deep residual learning for image recognition,” arXiv preprint arXiv:1512.03385, 2015.
[25] G. Huang, Z. Liu, L. V. D. Maaten, and K. Q. Weinberger, “Densely connected convolutional networks,” in Computer Vision and Pattern Recognition, 2017, pp. 2261-2269.
[26] A. G. Howard et al., “MobileNets: Efficient convolutional neural networks for mobile vision applications,” arXiv preprint arXiv:1704.04861v1, 2017.
[27] M. Tan and Q. V. Le, “EfficientNet: Rethinking model scaling for convolutional neural networks,” arXiv preprint arXiv:1905.11946v5, 2019.
[28] Y. Zhou, M. Wu, Y. Bai, and C. Guo, “Flame detection with pruned and knowledge distilled YOLOv5,” in 5th Asian Conference on Artificial Intelligence Technology, 2021, pp. 780-785.
[29] S. Liu, L. Qi, H. Qin, J. Shi, and J. Jia, “Path aggregation network for instance segmentation,” arXiv preprint arXiv:1803.01534, 2018.
[30] “Histograms - 2: Histogram equalization,” OpenCV. [Online]. Available: https://docs.opencv.org/3.1.0/d5/daf/tutorial_py_histogram_equalization.html. [Accessed: 28-Jul-2022].
[31] “Histograms - 2: Histogram equalization,” OpenCV. [Online]. Available: https://docs.opencv.org/4.x/d5/daf/tutorial_py_histogram_equalization.html. [Accessed: 15-Aug-2022].