Master's/Doctoral Thesis 109423028: Detailed Record




Name  Ping-Hung Chi (齊秉宏)    Department  Information Management
Thesis Title  A Novel Auto-encoder Task-based Similarity Continual Learning
Related Theses
★ Taiwan 50 Trend Analysis: Forecasting Based on a Multiple LSTM Model Architecture ★ Gold Price Prediction Analysis Based on a Multiple Recurrent Neural Network Model
★ Incremental Learning for Defect Detection in Industry 4.0 ★ A Study on Recurrent Neural Networks for Computer Component Sales Price Prediction
★ A Study on LSTM Neural Networks for Phishing Website Prediction ★ A Study on Deep-Learning-Based Frequency-Hopping Signal Recognition
★ Opinion Leader Discovery in Dynamic Social Networks ★ Deep Learning Models for Virtual Metrology Applications in Industry 4.0
★ A Novel NMF-Based Movie Recommendation with Time Decay ★ Category-Based Sequence-to-Sequence Model for POI Travel Itinerary Recommendation
★ A DQN-Based Reinforcement Learning Model for Neural Network Architecture Search ★ Neural Network Architecture Optimization Based on Virtual Reward Reinforcement Learning
★ Generative Adversarial Network Architecture Search ★ Neural Architecture Search Optimization via a Progressive Genetic Algorithm
★ Enhanced Model Agnostic Meta Learning with Meta Gradient Memory ★ A Study on Stock Price Prediction Using Recurrent Neural Networks Combined with Leading Industrial Wastewater Indicators
Files  Full text viewable in the system after 2027-07-01
Abstract (Chinese) In the era of big data, many datasets are too large to be used in a single training run, so the data must be processed in parts. However, machine learning has traditionally optimized only for the data at hand, causing past data to be forgotten, and current algorithms are very time-consuming in their higher-level computations. We therefore aim to propose a machine learning algorithm with lower time complexity that better maintains accuracy and reduces forgetting. Catastrophic forgetting is a severe problem in incremental learning: when new data are learned, knowledge of past data cannot be well retained and is forgotten. To address this, we propose a new method that exploits an auto-encoder's ability to reconstruct images to estimate the similarity between tasks; this similarity then steers the model's update direction to mitigate catastrophic forgetting while preserving the accuracy the model should achieve. We validate the method on MNIST Rotation, MNIST Permutations, and CIFAR-100, and tune the model's details. Finally, we apply the model to real-world factory data to verify its feasibility. The results show that this method obtains better results than previous approaches and mitigates forgetting more effectively.
Abstract (English) Catastrophic forgetting is a serious problem in incremental learning: a model loses the information of the first task after training on the second task. In the era of big data, datasets may be too large to be used in a single round of machine learning, so the data must be processed separately. During training, we must address data availability and resource scarcity, and ensure that as the model learns more data, it learns and remembers more. Therefore, we propose a machine learning algorithm with lower time complexity and better maintenance of accuracy and forgetting rate to improve performance on the catastrophic forgetting problem. In our proposed method, we use an auto-encoder's ability to reconstruct images to derive the similarity of each task to every other task. This similarity steers the model's gradient update direction to mitigate the so-called catastrophic forgetting problem while achieving the original accuracy. We implement our approach on MNIST Rotation, MNIST Permutations, CIFAR-100, and a real-world dataset. The experimental results show that this method obtains better results than previous approaches and mitigates forgetting more effectively.
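The mechanism described in the abstract can be sketched in code. The following is a minimal, hypothetical illustration, not the thesis's actual implementation: `task_similarity` maps an old task's auto-encoder reconstruction error on new-task data to a similarity score, and `adjusted_gradient` uses that score to soften a GEM-style gradient projection. All function names, the exponential similarity mapping, and the toy linear auto-encoder are assumptions made for illustration.

```python
import numpy as np

def reconstruction_error(autoencoder, x):
    """Mean squared error between inputs and their reconstructions."""
    return float(np.mean((autoencoder(x) - x) ** 2))

def task_similarity(autoencoder_old, x_new, scale=1.0):
    """Map reconstruction error to a similarity in (0, 1]: the better an old
    task's auto-encoder reconstructs the new task's data, the more similar
    the two tasks are assumed to be (exponential mapping is an assumption)."""
    return float(np.exp(-scale * reconstruction_error(autoencoder_old, x_new)))

def adjusted_gradient(g_new, g_old, sim):
    """If the new-task gradient conflicts with a past task's gradient
    (negative dot product), project it away from the conflict, weighted by
    the task similarity, in the style of GEM/A-GEM projection."""
    dot = g_new @ g_old
    if dot < 0:
        g_new = g_new - sim * (dot / (g_old @ g_old)) * g_old
    return g_new

# Toy usage with a linear stand-in for a trained auto-encoder.
rng = np.random.default_rng(0)
W = 0.9 * np.eye(4)                 # hypothetical learned reconstruction map
ae = lambda x: x @ W.T
x_new = rng.normal(size=(8, 4))
sim = task_similarity(ae, x_new)    # high: reconstructions are close to inputs
g = adjusted_gradient(np.array([1.0, -1.0]), np.array([-1.0, 0.5]), sim)
```

The design intuition, as the abstract describes it, is that similar tasks should constrain each other's updates more strongly than dissimilar ones, so the projection strength scales with the similarity score.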
Keywords (Chinese) ★ Machine Learning
★ Continual Learning
★ Incremental Learning
★ Catastrophic Forgetting
Keywords (English) ★ Machine Learning
★ Continual Learning
★ Incremental Learning
★ Catastrophic Forgetting
Table of Contents  Chinese Abstract
Abstract
Table of Contents
List of Figures
List of Tables
1. Introduction
2. Related Work
2.1 Continual Learning
2.2 Memory-based Continual Learning
2.3 Gradient-based Continual Learning
2.4 Continual Learning with Generative Models
2.5 Continual Learning Survey
3. Auto-encoder Task-based Similarity Continual Learning Model (ATS)
3.1 Gradient Projection
3.2 Auto-encoder
3.3 Training
4. Experimental Results
4.1 Datasets
4.2 Baseline Models
4.3 Evaluation Metrics
4.4 Experiments
4.4.1 Performance on Different Numbers of Tasks
4.4.2 Performance on Different Numbers of Samples per Task (20 Tasks)
4.4.3 Performance on Different Numbers of Classes per Task
4.4.4 Performance on Auto-encoder Settings
4.4.5 Real Data Discussion
5. Conclusion
6. References
Advisor  Yi-Cheng Chen (陳以錚)    Approval Date  2022-07-21
