Master's/Doctoral Thesis 106522065: Detailed Record




Author: Yih-Shyang Chiu (邱義翔)    Department: Computer Science and Information Engineering
Thesis Title: Improving Generative Adversarial Networks for Binary Classification on Similar and Imbalanced Data
(改進生成對抗網路做相似且不平衡數據的二元分類)
Related Theses
★ A video error concealment method for large damaged areas and scene changes
★ Force feedback correction and rendering in a virtual haptic system
★ Multispectral satellite image fusion and infrared image synthesis
★ A laparoscopic cholecystectomy surgery simulation system
★ Dynamically-loaded multiresolution terrain modeling in a flight simulation system
★ Wavelet-based multiresolution terrain modeling and texture mapping
★ Multiresolution optical flow analysis and depth computation
★ Volume-preserving deformation modeling for laparoscopic surgery simulation
★ Interactive multiresolution model editing techniques
★ Wavelet-based multiresolution edge tracking for edge detection
★ Multiresolution modeling based on quadric error and attribute criteria
★ Progressive image compression based on the integer wavelet transform and grey theory
★ Tactical simulation built on dynamically-loaded multiresolution terrain modeling
★ Face detection and feature extraction using spatial relations from multi-level segmentation
★ Wavelet-based image watermarking and compression
★ Appearance-preserving and view-dependent multiresolution modeling
  1. The electronic full text of this thesis is approved for immediate open access.
  2. The open-access electronic full text is licensed to users only for personal, non-profit retrieval, reading, and printing for the purpose of academic research.
  3. Please comply with the relevant provisions of the Copyright Act of the Republic of China; do not reproduce, distribute, adapt, repost, or broadcast it without authorization.

Abstract (Chinese): We propose a semi-supervised convolutional network for binary classification that combines a variational autoencoder with a deep convolutional generative adversarial network and decides the class from the similarity between the original image and the generated image. Because training only requires images from one of the classes, the method is unaffected by the size gap between classes and is therefore well suited to classifying imbalanced data.
  We make two kinds of improvements to the generative adversarial network (GAN) in this system. The first kind makes GAN training more stable: GANs are known to work well but to be hard to train; besides the likelihood of running into vanishing or exploding gradients, they also easily suffer from mode collapse, i.e., a lack of diversity in the generated images. The second kind of improvement lets the GAN learn better features, so that the images it generates are as close as possible to the trained class; even when an input image does not belong to the trained class, the generated image still resembles the trained class.
  In the experiments, we take X-ray images of electronic components as an example and use the proposed system together with a simple decision criterion that measures the similarity between the original and generated images, obtaining close to 94% accuracy for every class.
Abstract (English): We propose a semi-supervised convolutional neural network for binary classification, which combines a variational autoencoder with a generative adversarial network (GAN) to classify similar objects by thresholding the similarities between original images and generated images. Since the model is trained with samples from only one of the classes, the method is not affected by imbalanced data, which makes it suitable for imbalanced-data classification.
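The decision rule described above can be written down compactly. The following PyTorch sketch is only an illustration under assumptions: `encoder`, `generator`, and the threshold value are hypothetical placeholders standing in for the trained networks and the criterion of the thesis, not its actual code.

```python
# Minimal sketch of the classification rule: an encoder maps an input image
# to a latent code, a generator reconstructs it, and the class is decided by
# thresholding the dissimilarity between the original and generated image.
# `encoder`, `generator`, and `threshold` are hypothetical placeholders.
import torch
import torch.nn.functional as F

@torch.no_grad()
def classify(image: torch.Tensor,
             encoder: torch.nn.Module,
             generator: torch.nn.Module,
             threshold: float = 0.05) -> bool:
    """Return True if `image` is judged to belong to the trained class."""
    z = encoder(image)                 # latent code of the input image
    reconstruction = generator(z)      # image re-generated from the code
    # Per-pixel mean squared error as a simple dissimilarity measure;
    # the thesis's own similarity criterion may differ.
    dissimilarity = F.mse_loss(reconstruction, image).item()
    return dissimilarity < threshold   # similar enough -> trained class
```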
  There are two kinds of improvements in the proposed system. The first improves the training stability of the GAN: it is well known that GANs are effective but hard to train, since gradient vanishing, gradient exploding, and mode collapse can easily be encountered. The second kind of improvement makes the GAN learn better features, so that any generated image looks as close as possible to the trained class, even when the input image does not belong to the trained class.
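Chapter 3 of the thesis covers the Wasserstein GAN as one of the stability-oriented improvements. As a rough illustration of that direction only, the sketch below shows a Wasserstein critic loss with the gradient penalty commonly used in the literature; the `critic` network and the penalty weight of 10.0 are assumptions, and the thesis's actual training objective may differ.

```python
# Sketch of a Wasserstein critic loss with gradient penalty, a standard
# remedy for vanishing/exploding gradients and mode collapse compared with
# the original GAN loss. Not the thesis's exact training code.
import torch

def critic_loss_wgan_gp(critic, real, fake, gp_weight=10.0):
    # Wasserstein estimate: the critic should score real high and fake low.
    loss = critic(fake).mean() - critic(real).mean()

    # Gradient penalty on random interpolations between real and fake images.
    eps = torch.rand(real.size(0), 1, 1, 1, device=real.device)
    mixed = (eps * real + (1.0 - eps) * fake).requires_grad_(True)
    scores = critic(mixed)
    grads = torch.autograd.grad(outputs=scores.sum(), inputs=mixed,
                                create_graph=True)[0]
    grad_norm = grads.flatten(1).norm(2, dim=1)
    penalty = ((grad_norm - 1.0) ** 2).mean()

    return loss + gp_weight * penalty
```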
  We used X-ray images of electronic components in our experiments and, using a simple similarity criterion, obtained a true positive rate of nearly 94% for every class.
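To make the thresholding step concrete, here is a small, hypothetical evaluation sketch: the `scores` and `labels` arrays are made-up stand-ins rather than the thesis's X-ray data, and the similarity measure is assumed to be the dissimilarity score from the first sketch rather than the thesis's own criterion.

```python
# Evaluating a dissimilarity-threshold classifier with the true positive rate.
# `scores` are per-image dissimilarities, `labels` mark whether each image
# truly belongs to the trained class; both arrays are illustrative only.
import numpy as np

def true_positive_rate(scores: np.ndarray, labels: np.ndarray,
                       threshold: float) -> float:
    predicted_positive = scores < threshold   # similar enough -> trained class
    positives = labels.astype(bool)
    return (predicted_positive & positives).sum() / max(positives.sum(), 1)

# Example: pick the threshold on a validation split, then report the TPR.
scores = np.array([0.02, 0.03, 0.10, 0.01, 0.20])
labels = np.array([1, 1, 0, 1, 0])
print(true_positive_rate(scores, labels, threshold=0.05))  # -> 1.0
```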
Keywords: ★ generative adversarial network
★ binary classification
★ imbalanced data
Table of Contents
Abstract (Chinese)
Abstract (English)
Acknowledgements
Table of Contents
List of Figures
List of Tables
Chapter 1: Introduction
1.1 Research Motivation
1.2 System Architecture
1.3 Thesis Contributions
1.4 Thesis Organization
Chapter 2: Related Work
2.1 Imbalanced Data
2.2 Generative Adversarial Networks
Chapter 3: The Improved Generative Adversarial Network
3.1 Generative Adversarial Networks
3.2 Deep Convolutional Generative Adversarial Networks
3.3 Wasserstein Generative Adversarial Networks
3.4 Residual Self-Attention Layers
3.5 Variational Autoencoders
3.6 Architecture of the Improved Generative Adversarial Network
Chapter 4: Experiments and Results
4.1 Experimental Equipment and Environment
4.2 Experimental Method
4.3 Experimental Data
4.4 Experiments and Results
Chapter 5: Conclusions and Future Work
References
Advisor: Din-Chang Tseng (曾定章)    Date of Approval: 2019-07-24
