References
[1] Zhe Cao, Tomas Simon, Shih-En Wei, and Yaser Sheikh, “OpenPose: Realtime Multi-Person 2D Pose Estimation using Part Affinity Fields,” IEEE Transactions on Pattern Analysis and Machine Intelligence, vol. 43, no. 1, pp. 172-186, 2021.
[2] Alexander Toshev and Christian Szegedy, “DeepPose: Human Pose Estimation via Deep Neural Networks,” in Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, pp. 1653-1660, 2014.
[3] Jonathan Tompson, Arjun Jain, Yann LeCun, and Christoph Bregler, “Joint Training of a Convolutional Network and a Graphical Model for Human Pose Estimation,” in Advances in Neural Information Processing Systems, 2014.
[4] Alejandro Newell, Kaiyu Yang, and Jia Deng, “Stacked Hourglass Networks for Human Pose Estimation,” in Proceedings of the European Conference on Computer Vision (ECCV), 2016.
[5] Han Yang, Ruimao Zhang, Xiaobao Guo, Wei Liu, Wangmeng Zuo, and Ping Luo, “Towards Photo-Realistic Virtual Try-On by Adaptively Generating↔Preserving Image Content,” in arXiv preprint arXiv:2003.05863, 2020.
[6] Xintong Han, Zuxuan Wu, Zhe Wu, Ruichi Yu, and Larry S. Davis, “VITON: An Image-based Virtual Try-on Network,” in Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, pp. 7543-7552, 2018.
[7] Ke Gong, Xiaodan Liang, Dongyu Zhang, Xiaohui Shen, and Liang Lin, “Look into Person: Self-supervised Structure-sensitive Learning and A New Benchmark for Human Parsing,” in Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, pp. 932-940, 2017.
[8] Olaf Ronneberger, Philipp Fischer, and Thomas Brox, “U-Net: Convolutional Networks for Biomedical Image Segmentation,” in Proceedings of the International Conference on Medical Image Computing and Computer-Assisted Intervention (MICCAI), pp. 234-241, 2015.
[9] Liqian Ma, Xu Jia, Qianru Sun, Bernt Schiele, Tinne Tuytelaars, and Luc Van Gool, “Pose Guided Person Image Generation,” in Advances in Neural Information Processing Systems, 2017.
[10] Max Jaderberg, Karen Simonyan, Andrew Zisserman, and Koray Kavukcuoglu, “Spatial Transformer Networks,” in Advances in Neural Information Processing Systems, pp. 2017-2025, 2015.
[11] Jean Duchon, “Splines Minimizing Rotation-Invariant Semi-norms in Sobolev Spaces,” in Constructive Theory of Functions of Several Variables, pp. 85-100, 1977.
[12] Shane Barratt and Rishi Sharma, “A Note on the Inception Score,” in arXiv preprint arXiv:1801.01973, 2018.
[13] Bochao Wang, Huabin Zheng, Xiaodan Liang, Yimin Chen, Liang Lin, and Meng Yang, “Towards Characteristic-Preserving Image-based Virtual Try-On Network,” in Proceedings of the European Conference on Computer Vision, pp. 589-603, 2018.
[14] Guilin Liu, Fitsum A. Reda, Kevin J. Shih, Ting-Chun Wang, Andrew Tao, and Bryan Catanzaro, “Image Inpainting for Irregular Holes Using Partial Convolutions,” in Proceedings of the European Conference on Computer Vision, pp. 85-100, 2018.
[15] Justin Johnson, Alexandre Alahi, and Li Fei-Fei, “Perceptual Losses for Real-Time Style Transfer and Super-Resolution,” in Proceedings of the European Conference on Computer Vision, pp. 694-711, 2016.
[16] Ian J. Goodfellow, Jean Pouget-Abadie, Mehdi Mirza, Bing Xu, David Warde-Farley, Sherjil Ozair, Aaron Courville, and Yoshua Bengio, “Generative Adversarial Nets,” in Advances in Neural Information Processing Systems, pp. 2672-2680, 2014.
[17] Christian Ledig, Lucas Theis, Ferenc Huszar, Jose Caballero, Andrew Cunningham, Alejandro Acosta, Andrew Aitken, Alykhan Tejani, Johannes Totz, Zehan Wang, and Wenzhe Shi, “Photo-Realistic Single Image Super-Resolution Using a Generative Adversarial Network,” in arXiv preprint arXiv:1609.04802, 2016.
[18] Yunjey Choi, Minje Choi, Munyoung Kim, Jung-Woo Ha, Sunghun Kim, and Jaegul Choo, “StarGAN: Unified Generative Adversarial Networks for Multi-Domain Image-to-Image Translation,” in Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, pp. 8789-8797, 2018.
[19] Ming-Yu Liu and Oncel Tuzel, “Coupled Generative Adversarial Networks,” in Advances in Neural Information Processing Systems, pp. 469-477, 2016.
[20] Li-Chia Yang, Szu-Yu Chou, and Yi-Hsuan Yang, “MidiNet: A Convolutional Generative Adversarial Network for Symbolic-domain Music Generation,” in Proceedings of the 18th International Society for Music Information Retrieval Conference, pp. 314-331, 2017.
[21] Hao-Wen Dong, Wen-Yi Hsiao, Li-Chia Yang, and Yi-Hsuan Yang, “MuseGAN: Multi-track Sequential Generative Adversarial Networks for Symbolic Music Generation and Accompaniment,” in Proceedings of the Thirty-Second AAAI Conference on Artificial Intelligence, 2018.
[22] Olof Mogren, “C-RNN-GAN: Continuous Recurrent Neural Networks with Adversarial Training,” in arXiv preprint arXiv:1611.09904, 2016.
[23] Scott Reed, Zeynep Akata, Xinchen Yan, Lajanugen Logeswaran, Bernt Schiele, and Honglak Lee, “Generative Adversarial Text to Image Synthesis,” in arXiv preprint arXiv:1605.05396, 2016.
[24] Han Zhang, Tao Xu, Hongsheng Li, Shaoting Zhang, Xiaogang Wang, Xiaolei Huang, and Dimitris Metaxas, “StackGAN: Text to Photo-Realistic Image Synthesis With Stacked Generative Adversarial Networks,” in Proceedings of the IEEE International Conference on Computer Vision, 2017.
[25] Tao Xu, Pengchuan Zhang, Qiuyuan Huang, Han Zhang, Zhe Gan, Xiaolei Huang, and Xiaodong He, “AttnGAN: Fine-Grained Text to Image Generation with Attentional Generative Adversarial Networks,” in Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, 2018.
[26] Runde Li, Jinshan Pan, Zechao Li, and Jinhui Tang, “Single Image Dehazing via Conditional Generative Adversarial Network,” in Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, 2018.
[27] Rui Qian, Robby T. Tan, Wenhan Yang, Jiajun Su, and Jiaying Liu, “Attentive Generative Adversarial Network for Raindrop Removal from A Single Image,” in Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, pp. 2482-2491, 2018.
[28] Lvmin Zhang, Yi Ji, and Xin Lin, “Style Transfer for Anime Sketches with Enhanced Residual U-net and Auxiliary Classifier GAN,” in Proceedings of the Asian Conference on Pattern Recognition, pp. 506-511, 2017.
[29] Mehdi Mirza and Simon Osindero, “Conditional Generative Adversarial Nets,” in CoRR, abs/1411.1784, 2014.
[30] Phillip Isola, Jun-Yan Zhu, Tinghui Zhou, and Alexei A. Efros, “Image-to-Image Translation with Conditional Adversarial Networks,” in Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, pp. 1125-1134, 2017.
[31] Ting-Chun Wang, Ming-Yu Liu, Jun-Yan Zhu, Andrew Tao, Jan Kautz, and Bryan Catanzaro, “High-Resolution Image Synthesis and Semantic Manipulation with Conditional GANs,” in Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, pp. 8798-8807, 2018.
[32] Orest Kupyn, Volodymyr Budzan, Mykola Mykhailych, Dmytro Mishkin, and Jiri Matas, “DeblurGAN: Blind Motion Deblurring Using Conditional Adversarial Networks,” in Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, pp. 8183-8192, 2018.
[33] Martin Arjovsky and Leon Bottou, “Towards Principled Methods for Training Generative Adversarial Networks,” in International Conference on Learning Representations, 2017.
[34] Martin Arjovsky, Soumith Chintala, and Leon Bottou, “Wasserstein GAN,” in arXiv preprint arXiv:1701.07875, 2017.
[35] Ishaan Gulrajani, Faruk Ahmed, Martin Arjovsky, Vincent Dumoulin, and Aaron Courville, “Improved Training of Wasserstein GANs,” in CoRR, abs/1704.00028, 2017.
[36] Takeru Miyato, Toshiki Kataoka, Masanori Koyama, and Yuichi Yoshida, “Spectral Normalization for Generative Adversarial Networks,” in International Conference on Learning Representations (ICLR), 2018.
[37] GitHub project, OpenPose keypoint output format: https://www.aiuai.cn/aifarm712.html
[38] Guosheng Lin, Anton Milan, Chunhua Shen, and Ian Reid, “RefineNet: Multi-Path Refinement Networks for High-Resolution Semantic Segmentation,” in Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, pp. 5168-5177, 2017.
[39] Anaconda introduction and installation tutorial: https://medium.com/python4u/anaconda%E4%BB%8B%E7%B4%B9%E5%8F%8A%E5%AE%89%E8%A3