Abstract (English)
Artificial intelligence has been one of the most widely discussed topics in computer science in recent years. By learning features from training data and using machine learning to create new content, the technology has flourished; among its applications, image generation has developed the most.
This paper focuses on the automatic creation of cartoon images. Its premise is that training data of the same kind are segmented, grouped, and used to acquire a region relationship graph; each region is then inpainted, deformed, and assembled to create new images. In contrast to deep-learning approaches such as Generative Adversarial Networks (GANs), this paper adopts image-processing methods, which offer advantages in computing time, the amount of training data required, and hardware resources.
The system proposed in this paper is divided into three stages. Because regions in the original input image may occlude one another, the segmented regions can contain shadowed gaps; the first stage therefore inpaints each region. To increase the diversity of the creations, the second stage deforms the regions. Finally, a template is randomly selected, and the modified regions are assembled and adjusted; the user can also tune the parameters. The system's random parameters and the user's parameters combine into countless configurations, each creating a new image.
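The three-stage flow above can be sketched as follows. This is a minimal illustration of the control flow only, not the thesis implementation: here a "region" is simplified to a list of pixel values with None marking occluded pixels, and the functions inpaint, deform, and assemble are hypothetical stand-ins for the actual inpainting, deformation, and assembly operations.

```python
import random

# Stand-ins for the real operations: a "region" is just a list of pixel
# values, with None marking pixels occluded by an overlapping region.
def inpaint(region):
    # Stage 1: fill occluded (None) pixels; here, with the region's mean
    known = [p for p in region if p is not None]
    fill = sum(known) / len(known)
    return [fill if p is None else p for p in region]

def deform(region, amount):
    # Stage 2: scale values as a placeholder for geometric deformation
    return [p * (1.0 + amount) for p in region]

def assemble(template, regions):
    # Stage 3: place each region at the slot the template assigns it
    return {slot: regions[i] for i, slot in enumerate(template)}

def create_image(regions, templates, seed=None):
    """Run the three stages; seed controls the random choices."""
    rng = random.Random(seed)
    repaired = [inpaint(r) for r in regions]
    deformed = [deform(r, rng.uniform(-0.1, 0.1)) for r in repaired]
    template = rng.choice(templates)   # randomly selected template
    return assemble(template, deformed)
```

Because each run draws fresh random deformation amounts and a random template, repeated calls with different seeds (or user-supplied parameters in place of the random draws) yield the combinatorial variety the abstract describes.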
The experiments show that even a small amount of training data suffices to produce new creations, while more training data yields greater diversity. The results can also be recognized as objects of the same kind as the training data.
References
[1] L. A. Gatys, A. S. Ecker, and M. Bethge. Image style transfer using convolutional neural networks. In IEEE Conference on Computer Vision and Pattern Recognition (CVPR), pages 2414–2423, 2016.
[2] H. Fang and M. Zhang. Creatism: A deep-learning photographer capable of creating professional work. arXiv:1707.03491, 2017.
[3] I. J. Goodfellow, J. Pouget-Abadie, M. Mirza, B. Xu, D. Warde-Farley, S. Ozair, A. C. Courville, and Y. Bengio. Generative adversarial nets. In Advances in Neural Information Processing Systems (NIPS), pages 2672–2680, 2014.
[4] D. P. Kingma and M. Welling. Auto-encoding variational Bayes. In International Conference on Learning Representations (ICLR), 2014.
[5] A. van den Oord, N. Kalchbrenner, and K. Kavukcuoglu. Pixel recurrent neural networks. In International Conference on Machine Learning (ICML), 2016.
[6] R. Nock and F. Nielsen. Statistical region merging. IEEE Transactions on Pattern Analysis and Machine Intelligence, vol. 26, no. 11, pages 1452–1458, 2004.
[7] 游孟航. 基於樣本學習自動合成創作卡通圖像 (Automatic synthesis and creation of cartoon images based on example learning). Master's thesis, Department of Computer Science and Information Engineering, National Central University, 2017.
[8] T. Hayashi and T. Ooi. A scoring model of figural goodness and its application to contour completion.
[9] S. Schaefer, T. McPhail, and J. Warren. Image deformation using moving least squares. ACM Transactions on Graphics, vol. 25, no. 3, pages 533–540, 2006.
[10] D. Levin. The approximation power of moving least-squares. Mathematics of Computation, vol. 67, no. 224, 1998.
[11] S. Tulsiani, H. Su, L. J. Guibas, A. A. Efros, and J. Malik. Learning shape abstractions by assembling volumetric primitives. arXiv:1612.00404, 2016.
[12] S. Gurumurthy, R. Kiran Sarvadevabhatla, and V. Babu Radhakrishnan. DeLiGAN: Generative adversarial networks for diverse and limited data. arXiv e-prints, June 2017.