References
[1] N. V. Chawla, K. W. Bowyer, L. O. Hall, and W. P. Kegelmeyer, "SMOTE: synthetic minority over-sampling technique," arXiv:1106.1813 [cs.AI], 2011.
[2] T.-Y. Lin, P. Goyal, R. Girshick, K. He, and P. Dollár, "Focal loss for dense object detection," arXiv:1708.02002 [cs.CV], 2017.
[3] T. Salimans, I. Goodfellow, W. Zaremba, V. Cheung, A. Radford, and X. Chen, "Improved techniques for training GANs," arXiv:1606.03498 [cs.LG], 2016.
[4] S. Nowozin, B. Cseke, and R. Tomioka, "f-GAN: training generative neural samplers using variational divergence minimization," arXiv:1606.00709 [stat.ML], 2016.
[5] X. Mao, Q. Li, H. Xie, R. Y. K. Lau, Z. Wang, and S. P. Smolley, "Least squares generative adversarial networks," arXiv:1611.04076 [cs.CV], 2016.
[6] J. Zhao, M. Mathieu, and Y. LeCun, "Energy-based generative adversarial network," arXiv:1609.03126 [cs.LG], 2016.
[7] M. Arjovsky and L. Bottou, "Towards principled methods for training generative adversarial networks," arXiv:1701.04862 [stat.ML], 2017.
[8] M. Arjovsky, S. Chintala, and L. Bottou, "Wasserstein GAN," arXiv:1701.07875 [stat.ML], 2017.
[9] I. Gulrajani, F. Ahmed, M. Arjovsky, V. Dumoulin, and A. Courville, "Improved training of Wasserstein GANs," arXiv:1704.00028 [cs.LG], 2017.
[10] X. Wei, B. Gong, Z. Liu, W. Lu, and L. Wang, "Improving the improved training of Wasserstein GANs: a consistency term and its dual effect," arXiv:1803.01541 [cs.CV], 2018.
[11] T. Karras, T. Aila, S. Laine, and J. Lehtinen, "Progressive growing of GANs for improved quality, stability, and variation," in Proc. of Int. Conf. on Learning Representations, Vancouver, Canada, Apr. 30-May 3, 2018.
[12] M. Mirza and S. Osindero, "Conditional generative adversarial nets," arXiv:1411.1784 [cs.LG], 2014.
[13] X. Chen, Y. Duan, R. Houthooft, J. Schulman, I. Sutskever, and P. Abbeel, "InfoGAN: interpretable representation learning by information maximizing generative adversarial nets," arXiv:1606.03657 [cs.LG], 2016.
[14] I. J. Goodfellow, J. Pouget-Abadie, M. Mirza, B. Xu, D. Warde-Farley, S. Ozair, A. Courville, and Y. Bengio, "Generative adversarial nets," in Proc. of Neural Information Processing Systems, Montreal, Canada, Dec. 8-13, 2014, pp. 2672-2680.
[15] A. Radford, L. Metz, and S. Chintala, "Unsupervised representation learning with deep convolutional generative adversarial networks," arXiv:1511.06434 [cs.LG], 2015.
[16] M. D. Zeiler, D. Krishnan, G. W. Taylor, and R. Fergus, "Deconvolutional networks," in Proc. of IEEE Conf. on Computer Vision and Pattern Recognition, San Francisco, CA, Jun. 13-18, 2010, pp. 2528-2535.
[17] S. Ioffe and C. Szegedy, "Batch normalization: accelerating deep network training by reducing internal covariate shift," arXiv:1502.03167 [cs.LG], 2015.
[18] T. Miyato, T. Kataoka, M. Koyama, and Y. Yoshida, "Spectral normalization for generative adversarial networks," in Proc. of Int. Conf. on Learning Representations, Vancouver, Canada, Apr. 30-May 3, 2018.
[19] H. Zhang, I. Goodfellow, D. Metaxas, and A. Odena, "Self-attention generative adversarial networks," arXiv:1805.08318 [stat.ML], 2018.
[20] K. He, X. Zhang, S. Ren, and J. Sun, "Deep residual learning for image recognition," in Proc. of IEEE Conf. on Computer Vision and Pattern Recognition, Las Vegas, NV, 2016, pp. 770-778.
[21] D. P. Kingma and M. Welling, "Auto-encoding variational Bayes," arXiv:1312.6114 [stat.ML], 2013.