References
[1] 李昇龍. (2011). A study of face recognition based on incremental learning [國立臺灣師範大學].
[2] 周正全. (2006). A study of a wafer wire-bonding inspection system [國立交通大學].
[3] 邵文豪. (2006). Applying image processing techniques and neural networks to TFT-LCD defect recognition [華梵大學].
[4] 范姜皓. (2019). A study of deep-learning convolutional neural network classification models for AOI defect images [國立臺灣科技大學].
[5] 張力. (2020). A study on the development of an artificial-intelligence-assisted anomaly detection system [國立臺灣師範大學].
[6] 張上淵. (2002). Applying computer vision and neural networks to a BGA inspection system [國立交通大學].
[7] 梁俊傑. (2002). Development of a surface defect inspection system for DVD and VCD discs [國立屏東科技大學].
[8] 莊育明. (2009). Improving the incremental learning algorithm of support vector machines with fuzzy set theory [國立中山大學].
[9] 許辰合. (2002). An extraction system for basic component images on printed circuit boards [國立成功大學].
[10] 陳彥仲. (2002). An automatic visual inspection system for solder defects and missing or incorrect resistors on SMD PCBs [國立交通大學].
[11] 陳殿善. (2021). Multi-anchor knowledge distillation and continuously adjusted feature margins for class-incremental learning [國立陽明交通大學].
[12] 彭光裕. (2000). Design and development of a new automatic inspection system for surface-mount-device printed circuit boards using computer vision techniques [國立交通大學].
[13] 費浩杰. (2022). A study of efficient edge model deployment for incremental learning [國立臺灣大學].
[14] 黃國書. (2010). Automatic inspection of die pattern defects [國立交通大學].
[15] 楊琮華. (2002). Development of a computer-vision-assisted inspection system for component insertion defects on printed circuit board surfaces [國立屏東科技大學].
[16] 雷承勲. (2020). Designing efficient and concise objective functions to improve incremental learning performance [國立交通大學].
[17] 劉晉廷. (2021). Applying AOI inspection technology to automated connector equipment [國立高雄科技大學].
[18] 蔡陳杰. (2021). A study on developing an AOI auxiliary system based on CenterNet [國立臺灣師範大學].
[19] 鄭睿夫. (2002). Application of image processing techniques to carrier tape specification inspection [國立成功大學].
[20] 賴怡青. (2006). A study integrating ANN and SVM for gold bump surface defect classification [國立高雄師範大學].
[21] Aljundi, R., Chakravarty, P., & Tuytelaars, T. (2017). Expert gate: Lifelong learning with a network of experts. In Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition (pp. 3366-3375).
[22] Bertinetto, L., Henriques, J. F., Valmadre, J., Torr, P., & Vedaldi, A. (2016). Learning feed-forward one-shot learners. Advances in Neural Information Processing Systems, 29.
[23] Castro, F. M., Marín-Jiménez, M. J., Guil, N., Schmid, C., & Alahari, K. (2018). End-to-end incremental learning. In Proceedings of the European Conference on Computer Vision (ECCV) (pp. 233-248).
[24] Chaudhry, A., Ranzato, M. A., Rohrbach, M., & Elhoseiny, M. (2018). Efficient lifelong learning with A-GEM. arXiv preprint arXiv:1812.00420.
[25] Donahue, J., Jia, Y., Vinyals, O., Hoffman, J., Zhang, N., Tzeng, E., & Darrell, T. (2014, January). DeCAF: A deep convolutional activation feature for generic visual recognition. In International Conference on Machine Learning (pp. 647-655). PMLR.
[26] He, K., Zhang, X., Ren, S., & Sun, J. (2016). Deep residual learning for image recognition. In Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition (pp. 770-778).
[27] Kirkpatrick, J., Pascanu, R., Rabinowitz, N., Veness, J., Desjardins, G., Rusu, A. A., ... & Hadsell, R. (2017). Overcoming catastrophic forgetting in neural networks. Proceedings of the National Academy of Sciences, 114(13), 3521-3526.
[28] Krizhevsky, A., Sutskever, I., & Hinton, G. E. (2012). ImageNet classification with deep convolutional neural networks. Advances in Neural Information Processing Systems, 25.
[29] Li, Z., & Hoiem, D. (2017). Learning without forgetting. IEEE Transactions on Pattern Analysis and Machine Intelligence, 40(12), 2935-2947.
[30] Liu, Y., Su, Y., Liu, A. A., Schiele, B., & Sun, Q. (2020). Mnemonics training: Multi-class incremental learning without forgetting. In Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition (pp. 12245-12254).
[31] Lopez-Paz, D., & Ranzato, M. A. (2017). Gradient episodic memory for continual learning. Advances in Neural Information Processing Systems, 30.
[32] Rebuffi, S. A., Kolesnikov, A., Sperl, G., & Lampert, C. H. (2017). iCaRL: Incremental classifier and representation learning. In Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition (pp. 2001-2010).
[33] Riemer, M., Cases, I., Ajemian, R., Liu, M., Rish, I., Tu, Y., & Tesauro, G. (2018). Learning to learn without forgetting by maximizing transfer and minimizing interference. arXiv preprint arXiv:1810.11910.
[34] Ruvolo, P., & Eaton, E. (2013, February). ELLA: An efficient lifelong learning algorithm. In International Conference on Machine Learning (pp. 507-515). PMLR.
[35] Sun, Q., Liu, Y., Chen, Z., Chua, T. S., & Schiele, B. (2020). Meta-transfer learning through hard tasks. IEEE Transactions on Pattern Analysis and Machine Intelligence.
[36] Tao, X., Hong, X., Chang, X., Dong, S., Wei, X., & Gong, Y. (2020). Few-shot class-incremental learning. In Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition (pp. 12183-12192).
[37] Wu, Y., Chen, Y., Wang, L., Ye, Y., Liu, Z., Guo, Y., & Fu, Y. (2019). Large scale incremental learning. In Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition (pp. 374-382).
[38] Glorot, X., & Bengio, Y. (2010). Understanding the difficulty of training deep feedforward neural networks. In International Conference on Artificial Intelligence and Statistics (AISTATS).
[39] Xie, S., Girshick, R., Dollár, P., Tu, Z., & He, K. (2017). Aggregated residual transformations for deep neural networks. In Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition (pp. 1492-1500).
[40] Yang, J., Li, S., Wang, Z., & Yang, G. (2019). Real-time tiny part defect detection system in manufacturing using deep learning. IEEE Access, 7, 89278-89291.
[41] Zadrozny, B. (2004, July). Learning and evaluating classifiers under sample selection bias. In Proceedings of the Twenty-First International Conference on Machine Learning (p. 114).
[42] Zhao, B., Xiao, X., Gan, G., Zhang, B., & Xia, S. T. (2020). Maintaining discrimination and fairness in class incremental learning. In Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition (pp. 13208-13217).