Master's/Doctoral Thesis 105522041 — Detailed Record




Author: Jian-Ying Wu (吳建穎)    Department: Computer Science and Information Engineering
Title: Using Generative Adversarial Network to Automatically Generate Images for Special Purpose: A Case Study for Particular Iris Images
Related theses
★ Multi-type headache classification with extreme learning machines based on the iris color space
★ Iris image quality assessment via weighted multi-score fusion
★ Deep-learning-based intelligent machine vision for industry: a case study on text localization and recognition
★ A real-time blood pressure estimation algorithm based on deep learning
★ Deep-learning-based intelligent machine vision for industry: a case study on solder joint quality inspection
★ A conditional iris image generation framework based on the pix2pix deep learning model
★ An eye-tracker system implemented with kernelized correlation filter object tracking
★ Validation and calibration of a laser Doppler blood flow prototype
★ A fast iris segmentation algorithm based on Faster R-CNN
★ Classifying diabetic retinopathy symptoms with deep learning, support vector machines, and teaching-learning-based optimization
★ Iris mask estimation using convolutional neural networks
★ Collaborative Drama-based EFL Learning with Mobile Technology Support in Familiar Context
★ A web service for automatic training of deep learning networks
★ A high-accuracy cosmetic contact lens detection algorithm based on deep learning
★ A CNN-based model for distinguishing real and fake faces
★ Deep learning foundation models and self-supervised learning
  1. The author has agreed to make this electronic thesis openly available immediately.
  2. The released electronic full text is licensed only for personal, non-commercial retrieval, reading, and printing for the purpose of academic research.
  3. Please comply with the relevant provisions of the Copyright Act of the Republic of China (Taiwan); do not reproduce, distribute, adapt, repost, or broadcast this work without authorization.

Abstract (Chinese): Generative adversarial networks (GANs) [1] are among the most active research topics in artificial intelligence today. A GAN is a powerful generative model whose idea originates from the two-player zero-sum game of game theory; it consists of a generator and a discriminator, and training proceeds through adversarial learning. In CNN-based image recognition work, the first practical difficulty is collecting enough images for training and testing the deep learning network. The development of iris recognition algorithms faces a similar problem. In this thesis we propose a new conditional generative adversarial network that combines WGAN-GP with an independent classifier to achieve the desired results. With this method, images with particular, user-specified properties can be generated, alleviating the shortage of training images in deep learning experiments and leading to better experimental results.
Abstract (English): Generative Adversarial Networks (GANs) are among the most popular research topics in the field of artificial intelligence. A GAN is a powerful generative model whose idea is derived from the two-person zero-sum game of game theory. It consists of a generator and a discriminator; by training these two models simultaneously in an adversarial fashion, both become more powerful at the tasks they are designed to achieve. In CNN-based image recognition work, the first practical difficulty is collecting enough images for training and testing the deep learning network. Similar problems arise in the development of iris recognition algorithms. We construct a WGAN-GP combined with an independent classifier to achieve the desired results. Using this method, we can generate special images according to specified conditions, solving the problem of insufficient training images in the course of deep learning experiments and thereby enhancing the final recognition accuracy for the desired tasks.
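The abstracts describe combining a WGAN-GP critic with an independent classifier. The gradient-penalty term that characterizes WGAN-GP [6] can be illustrated with a deliberately tiny sketch — not the thesis's actual network: a linear critic whose input-gradient is known in closed form, so the penalty can be computed without automatic differentiation. All function names and data below are illustrative assumptions.

```python
import numpy as np

rng = np.random.default_rng(0)

def critic(x, w):
    # Toy linear critic D(x) = w . x; its gradient w.r.t. the input x is just w.
    return x @ w

def gradient_penalty(x_real, x_fake, w):
    # WGAN-GP penalty: mean of (||grad_x D(x_hat)|| - 1)^2, where x_hat is a
    # random interpolation between paired real and fake samples.
    eps = rng.uniform(size=(x_real.shape[0], 1))
    x_hat = eps * x_real + (1.0 - eps) * x_fake
    grad = np.tile(w, (x_hat.shape[0], 1))  # analytic input-gradient of w . x
    grad_norm = np.linalg.norm(grad, axis=1)
    return np.mean((grad_norm - 1.0) ** 2)

def critic_loss(x_real, x_fake, w, lam=10.0):
    # WGAN-GP critic objective: E[D(fake)] - E[D(real)] + lam * penalty.
    return (critic(x_fake, w).mean() - critic(x_real, w).mean()
            + lam * gradient_penalty(x_real, x_fake, w))

# "Real" samples centered at 2, "fake" samples centered at 0 (toy data).
x_real = rng.normal(loc=2.0, size=(64, 3))
x_fake = rng.normal(loc=0.0, size=(64, 3))
w = np.array([1.0, 0.0, 0.0])  # unit-norm weights make the penalty exactly 0
print(critic_loss(x_real, x_fake, w))
```

In a real implementation the critic is a deep network and the gradient at the interpolated points is obtained by automatic differentiation; the λ = 10 weighting follows the value suggested in [6].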
Keywords
★ Generative Adversarial Network
★ Iris image
★ Image identification
Table of Contents
Chinese Abstract
English Abstract
Acknowledgments
Table of Contents
List of Figures
1. Introduction
1-1 Research Background
1-2 Research Motivation
1-3 Thesis Organization
2. Generative Adversarial Networks
2-1 GAN
2-2 Wasserstein GANs
2-3 WGAN-GP
2-4 CGAN and ACGAN
3. Proposed Method
3-1 Method Architecture
3-2 Algorithm Description
3-3 Extended Method
4. Iris Images and Experimental Results
4-1 Introduction to Iris Images
4-2 Experimental Results
4-2-1 Generating Digit Images
4-2-2 Generating Iris Images
5. Conclusion and Future Work
5-1 Conclusion
5-2 Future Work
6. References
References
[1] I. Goodfellow, J. Pouget-Abadie, M. Mirza, B. Xu, D. Warde-Farley, S. Ozair, A. Courville, and Y. Bengio. Generative adversarial nets. In Advances in Neural Information Processing Systems 27, pages 2672–2680. Curran Associates, Inc., 2014.
[2] F. Bastien, P. Lamblin, R. Pascanu, J. Bergstra, I. J. Goodfellow, A. Bergeron, N. Bouchard, and Y. Bengio. Theano: new features and speed improvements. Deep Learning and Unsupervised Feature Learning NIPS 2012 Workshop, 2012.
[3] T. Salimans, I. Goodfellow, W. Zaremba, V. Cheung, A. Radford, and X. Chen. Improved techniques for training GANs. In Advances in Neural Information Processing Systems, pages 2226–2234, 2016.
[4] M. Arjovsky and L. Bottou. Towards principled methods for training generative adversarial networks. In International Conference on Learning Representations, 2017.
[5] M. Arjovsky, S. Chintala, and L. Bottou. Wasserstein GAN. arXiv preprint arXiv:1701.07875, 2017.
[6] I. Gulrajani, F. Ahmed, M. Arjovsky, V. Dumoulin, and A. Courville. Improved training of Wasserstein GANs. arXiv preprint arXiv:1704.00028, 2017.
[7] M. Mirza and S. Osindero. Conditional generative adversarial nets. arXiv preprint arXiv:1411.1784, 2014.
[8] A. Odena, C. Olah, and J. Shlens. Conditional image synthesis with auxiliary classifier GANs. arXiv preprint arXiv:1610.09585, 2016.
[9] D. Berthelot, T. Schumm, and L. Metz. BEGAN: Boundary equilibrium generative adversarial networks. arXiv preprint arXiv:1703.10717, 2017.
[10] X. Mao, Q. Li, H. Xie, R. Y. Lau, and Z. Wang. Least squares generative adversarial networks. arXiv preprint arXiv:1611.04076, 2016.
[11] X. Chen, Y. Duan, R. Houthooft, J. Schulman, I. Sutskever, and P. Abbeel. InfoGAN: Interpretable representation learning by information maximizing generative adversarial nets. arXiv e-prints, June 2016.
[12] D. Berthelot, T. Schumm, and L. Metz. BEGAN: Boundary equilibrium generative adversarial networks. arXiv preprint arXiv:1703.10717, 2017.
[13] M. Lucic, K. Kurach, M. Michalski, S. Gelly, and O. Bousquet. Are GANs created equal? A large-scale study. arXiv preprint arXiv:1711.10337, 2017.
[14] A. Radford, L. Metz, and S. Chintala. Unsupervised representation learning with deep convolutional generative adversarial networks. arXiv preprint arXiv:1511.06434, 2015.
[15] Y. Wen, K. Zhang, Z. Li, and Y. Qiao. A discriminative feature learning approach for deep face recognition. In Computer Vision – ECCV 2016, pages 499–515, 2016.
[16] Y. Sun, X. Wang, and X. Tang. Deeply learned face representations are sparse, selective, and robust. In Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, pages 2892–2900, 2015.
[17] F. Schroff, D. Kalenichenko, and J. Philbin. FaceNet: A unified embedding for face recognition and clustering. In Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, pages 815–823, 2015.
[18] O. M. Parkhi, A. Vedaldi, and A. Zisserman. Deep face recognition. In Proceedings of the British Machine Vision Conference, vol. 1, no. 3, p. 6, 2015.
[19] Y. Taigman, M. Yang, M. Ranzato, and L. Wolf. DeepFace: Closing the gap to human-level performance in face verification. In Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, pages 1701–1708, 2014.
[20] S. Bell, C. L. Zitnick, K. Bala, and R. Girshick. Inside-outside net: Detecting objects in context with skip pooling and recurrent neural networks. In CVPR, 2016.
[21] T.-Y. Lin, P. Dollár, R. Girshick, K. He, B. Hariharan, and S. Belongie. Feature pyramid networks for object detection. In CVPR, 2017.
[22] R. Girshick. Fast R-CNN. In ICCV, 2015.
[23] S. Ren, K. He, R. Girshick, and J. Sun. Faster R-CNN: Towards real-time object detection with region proposal networks. In NIPS, 2015.
[24] K. He, G. Gkioxari, P. Dollár, and R. Girshick. Mask R-CNN. arXiv preprint arXiv:1703.06870, 2017.
[25] CASIA-IrisV4 database. http://biometrics.idealtest.org/dbDetailForUser.do?id=4
Advisor: Yung-Hui Li (栗永徽)    Approval date: 2018-08-15

For thesis-related questions, please contact the Promotion Services Division of the National Central University Library, TEL: (03)422-7151 ext. 57407, or by e-mail. — Privacy Policy Statement