References
[1] Ian J. Goodfellow, Jean Pouget-Abadie, Mehdi Mirza, Bing Xu, David Warde-Farley, Sherjil Ozair, Aaron Courville and Yoshua Bengio, “Generative Adversarial Networks,” NIPS, 2014.
[2] Diederik P. Kingma and Max Welling, “Auto-Encoding Variational Bayes,” ICLR, 2014.
[3] Mehdi Mirza and Simon Osindero, “Conditional Generative Adversarial Nets,” arXiv:1411.1784, 2014.
[4] Karol Gregor, Ivo Danihelka, Alex Graves, Danilo Jimenez Rezende and Daan Wierstra, “DRAW: A Recurrent Neural Network For Image Generation,” ICML, 2015.
[5] Aaron van den Oord, Nal Kalchbrenner, Oriol Vinyals, Lasse Espeholt, Alex Graves and Koray Kavukcuoglu, “Conditional Image Generation with PixelCNN Decoders,” NIPS, 2016.
[6] Alec Radford, Luke Metz and Soumith Chintala, “Unsupervised Representation Learning with Deep Convolutional Generative Adversarial Networks,” ICLR, 2016.
[7] Scott Reed, Zeynep Akata, Xinchen Yan, Lajanugen Logeswaran, Bernt Schiele and Honglak Lee, “Generative Adversarial Text to Image Synthesis,” ICML, 2016.
[8] Han Zhang, Tao Xu, Hongsheng Li, Shaoting Zhang, Xiaogang Wang, Xiaolei Huang and Dimitris Metaxas, “StackGAN: Text to Photo-realistic Image Synthesis with Stacked Generative Adversarial Networks,” ICCV, 2017.
[9] Han Zhang, Tao Xu, Hongsheng Li, Shaoting Zhang, Xiaogang Wang, Xiaolei Huang and Dimitris Metaxas, “StackGAN++: Realistic Image Synthesis with Stacked Generative Adversarial Networks,” arXiv:1710.10916, 2017.
[10] Tao Xu, Pengchuan Zhang, Qiuyuan Huang, Han Zhang, Zhe Gan, Xiaolei Huang and Xiaodong He, “AttnGAN: Fine-Grained Text to Image Generation With Attentional Generative Adversarial Networks,” CVPR, 2018.
[11] Andrew Brock, Jeff Donahue and Karen Simonyan, “Large Scale GAN Training for High Fidelity Natural Image Synthesis,” arXiv:1809.11096, 2018.
[12] Tero Karras, Timo Aila, Samuli Laine and Jaakko Lehtinen, “Progressive Growing of GANs for Improved Quality, Stability, and Variation,” arXiv:1710.10196, 2017.
[13] Scott Reed, Zeynep Akata, Bernt Schiele and Honglak Lee, “Learning Deep Representations of Fine-grained Visual Descriptions,” CVPR, 2016.
[14] Ashish Vaswani, Noam Shazeer, Niki Parmar, Jakob Uszkoreit, Llion Jones, Aidan N. Gomez, Lukasz Kaiser and Illia Polosukhin, “Attention Is All You Need,” NIPS, 2017.
[15] Scott Reed, Zeynep Akata, Santosh Mohan, Samuel Tenka, Bernt Schiele and Honglak Lee, “Learning What and Where to Draw,” NIPS, 2016.
[16] Volodymyr Mnih, Nicolas Heess, Alex Graves and Koray Kavukcuoglu, “Recurrent Models of Visual Attention,” NIPS, 2014.
[17] Kelvin Xu, Jimmy Ba, Ryan Kiros, Kyunghyun Cho, Aaron Courville, Ruslan Salakhutdinov, Richard Zemel and Yoshua Bengio, “Show, Attend and Tell: Neural Image Caption Generation with Visual Attention,” ICML, 2015.
[18] Tu Dinh Nguyen, Trung Le, Hung Vu and Dinh Phung, “Dual Discriminator Generative Adversarial Nets,” arXiv:1709.03831, 2017.
[19] Lantao Yu, Weinan Zhang, Jun Wang and Yong Yu, “SeqGAN: Sequence Generative Adversarial Nets with Policy Gradient,” AAAI, 2017.
[20] Jingjing Xu, Xuancheng Ren, Junyang Lin and Xu Sun, “DP-GAN: Diversity-Promoting Generative Adversarial Network for Generating Informative and Diversified Text,” EMNLP, 2018.
[21] Alina Kuznetsova, Hassan Rom, Neil Alldrin, Jasper Uijlings, Ivan Krasin, Jordi Pont-Tuset, Shahab Kamali, Stefan Popov, Matteo Malloci, Tom Duerig and Vittorio Ferrari, “The Open Images Dataset V4: Unified image classification, object detection, and visual relationship detection at scale,” arXiv:1811.00982, 2018.
[22] Bolei Zhou, Agata Lapedriza, Aditya Khosla, Aude Oliva and Antonio Torralba, “Places: A 10 Million Image Database for Scene Recognition,” IEEE Transactions on Pattern Analysis and Machine Intelligence, 2017.
[23] Martin Arjovsky, Soumith Chintala and Léon Bottou, “Wasserstein GAN,” arXiv:1701.07875, 2017.
[24] Sepp Hochreiter and Jürgen Schmidhuber, “Long Short-Term Memory,” Neural Computation, 1997.
[25] Qiantong Xu, Gao Huang, Yang Yuan, Chuan Guo, Yu Sun, Felix Wu and Kilian Weinberger, “An empirical study on evaluation metrics of generative adversarial networks,” arXiv:1806.07755, 2018.
[26] Martin Heusel, Hubert Ramsauer, Thomas Unterthiner, Bernhard Nessler and Sepp Hochreiter, “GANs Trained by a Two Time-Scale Update Rule Converge to a Local Nash Equilibrium,” NIPS, 2017.
[27] Jacob Devlin, Ming-Wei Chang, Kenton Lee and Kristina Toutanova, “BERT: Pre-training of Deep Bidirectional Transformers for Language Understanding,” arXiv:1810.04805, 2018.