References
[1] C. Guillemot and O. Le Meur, “Image inpainting: overview and recent advances,” IEEE Signal Processing Magazine, vol.31, no.1, pp.127-144, 2014.
[2] M. Bertalmio, G. Sapiro, V. Caselles, et al., “Image inpainting,” in Proc. ACM SIGGRAPH Conf., New Orleans, LA, Jul.23-28, 2000, pp.417-424.
[3] T. F. Chan and J. Shen, “Nontexture inpainting by curvature-driven diffusions,” Journal of Visual Communication and Image Representation, vol.12, no.4, pp.436-449, 2001.
[4] A. Telea, “An image inpainting technique based on the fast marching method,” Journal of Graphics Tools, vol.9, no.1, pp.23-34, 2004.
[5] C. Ballester, M. Bertalmio, V. Caselles, et al., “Filling-in by joint interpolation of vector fields and gray levels,” IEEE Trans. Image Processing, vol.10, no.8, pp.1200-1211, 2001.
[6] T. Chan and J. Shen, “Local inpainting models and TV inpainting,” SIAM Journal on Applied Mathematics, vol.62, no.3, pp.1019-1043, 2001.
[7] A. Levin, A. Zomet, and Y. Weiss, “Learning how to inpaint from global image statistics,” in Proc. IEEE Int. Conf. on Computer Vision, Nice, France, Oct.13-16, 2003, pp.305-312.
[8] L. Y. Wei and M. Levoy, “Fast texture synthesis using tree-structured vector quantization,” in Proc. ACM SIGGRAPH Conf., New Orleans, LA, Jul.23-28, 2000, pp.479-488.
[9] M. Ashikhmin, “Synthesizing natural textures,” in Proc. ACM Symp. on Interactive 3D Graphics, Research Triangle Park, NC, Mar.19-21, 2001, pp.217-226.
[10] A. A. Efros and W. T. Freeman, “Image quilting for texture synthesis and transfer,” in Proc. ACM SIGGRAPH Conf., Los Angeles, CA, Aug.12-17, 2001, pp.341-346.
[11] V. Kolmogorov and R. Zabih, “What energy functions can be minimized via graph cuts?,” IEEE Trans. Pattern Analysis and Machine Intelligence, vol.26, no.2, pp.147-159, 2004.
[12] P. Pérez, M. Gangnet, and A. Blake, “Poisson image editing,” ACM Trans. Graphics, vol.22, no.3, pp.313-318, 2003.
[13] A. Bugeau, M. Bertalmío, V. Caselles, et al., “A comprehensive framework for image inpainting,” IEEE Trans. Image Processing, vol.19, no.10, pp.2634-2645, 2010.
[14] J. Liu, P. Musialski, P. Wonka, et al., “Tensor completion for estimating missing values in visual data,” IEEE Trans. Pattern Analysis and Machine Intelligence, vol.35, no.1, pp.208-220, 2013.
[15] D. L. Donoho, “Compressed sensing,” IEEE Trans. Information Theory, vol.52, no.4, pp.1289-1306, 2006.
[16] M. Elad, J. L. Starck, P. Querre, et al., “Simultaneous cartoon and texture image inpainting using morphological component analysis (MCA),” Applied and Computational Harmonic Analysis, vol.19, no.3, pp.340-358, 2005.
[17] M. Elad and M. Aharon, “Image denoising via sparse and redundant representation over learned dictionaries,” IEEE Trans. Image Processing, vol.15, no.12, pp.3736-3745, 2006.
[18] J. Mairal, M. Elad, and G. Sapiro, “Sparse representation for color image restoration,” IEEE Trans. Image Processing, vol.17, no.1, pp.53-69, 2008.
[19] M. Elad, Sparse and Redundant Representations: From Theory to Applications in Signal and Image Processing, 1st ed., New York, NY: Springer, 2010.
[20] A. Krizhevsky, I. Sutskever, and G. E. Hinton, “Imagenet classification with deep convolutional neural networks,” in Proc. Advances In Neural Information Processing Systems, Lake Tahoe, NV, Dec.3-6, 2012, pp.1097-1105.
[21] Y. LeCun, B. E. Boser, J. S. Denker, et al., “Handwritten digit recognition with a back-propagation network,” in Proc. Advances in Neural Information Processing Systems, Denver, CO, 1990, pp.396-404.
[22] Y. LeCun, L. Bottou, Y. Bengio, et al., “Gradient-based learning applied to document recognition,” Proceedings of the IEEE, vol.86, no.11, pp.2278-2324, 1998.
[23] C. Cortes and V. Vapnik, “Support-vector networks,” Machine Learning, vol.20, no.3, pp.273-297, 1995.
[24] N. Srivastava, G. Hinton, A. Krizhevsky, et al., “Dropout: a simple way to prevent neural networks from overfitting,” Journal of Machine Learning Research, vol.15, no.1, pp.1929-1958, 2014.
[25] M. Zeiler, D. Krishnan, G. Taylor, et al., “Deconvolutional networks,” in Proc. IEEE Conf. on Computer Vision and Pattern Recognition, San Francisco, CA, Jun.13-18, 2010, pp.2528-2535.
[26] M. D. Zeiler and R. Fergus, “Visualizing and understanding convolutional networks,” in Proc. European Conf. on Computer Vision, Zurich, Switzerland, Sep.8-11, 2014, pp.818-833.
[27] J. Long, E. Shelhamer, and T. Darrell, “Fully convolutional networks for semantic segmentation,” in Proc. IEEE Conf. on Computer Vision and Pattern Recognition, Boston, MA, Jun.7-12, 2015, pp.3431-3440.
[28] A. Nguyen, J. Yosinski, and J. Clune, “Deep neural networks are easily fooled: high confidence predictions for unrecognizable images,” in Proc. IEEE Conf. on Computer Vision and Pattern Recognition, Boston, MA, Jun.7-12, 2015, pp.427-436.
[29] I. Goodfellow, J. Pouget-Abadie, M. Mirza, et al., “Generative adversarial nets,” in Proc. Advances in Neural Information Processing Systems, Montreal, Quebec, Canada, Dec.8-13, 2014, pp.2672-2680.
[30] D. Pathak, J. Donahue, and A. A. Efros, “Context encoders: feature learning by inpainting,” in Proc. IEEE Conf. on Computer Vision and Pattern Recognition, Las Vegas, NV, Jun.26-Jul.1, 2016, pp.2536-2544.
[31] V. Badrinarayanan, A. Kendall, and R. Cipolla, “Segnet: a deep convolutional encoder-decoder architecture for image segmentation,” IEEE Trans. Pattern Analysis and Machine Intelligence, vol.39, no.12, pp.2481-2495, 2017.
[32] S. Iizuka, E. Simo-Serra, and H. Ishikawa, “Globally and locally consistent image completion,” ACM Trans. Graphics, vol.36, no.4, p.107:1-107:14, 2017.
[33] J. T. Springenberg, A. Dosovitskiy, T. Brox, et al., “Striving for simplicity: the all convolutional net,” in Proc. Int. Conf. for Learning Representations (workshop track), San Diego, CA, May 7-9, 2015.
[34] F. Yu and V. Koltun, “Multi-scale context aggregation by dilated convolutions,” in Proc. Int. Conf. for Learning Representations, San Juan, Puerto Rico, May 2-4, 2016.
[35] K. Simonyan and A. Zisserman, “Very deep convolutional networks for large-scale image recognition,” in Proc. Int. Conf. for Learning Representations, San Diego, CA, May 7-9, 2015.
[36] S. Ioffe and C. Szegedy, “Batch normalization: accelerating deep network training by reducing internal covariate shift,” in Proc. Int. Conf. on Machine Learning, Lille, France, Jul.6-11, 2015, pp.448-456.
[37] R. Yeh, C. Chen, T. Y. Lim, et al., “Semantic image inpainting with perceptual and contextual losses,” in Proc. IEEE Conf. on Computer Vision and Pattern Recognition, Honolulu, HI, Jul.21-26, 2017.
[38] D. P. Kingma and J. L. Ba, “Adam: a method for stochastic optimization,” in Proc. Int. Conf. for Learning Representations, San Diego, CA, May 7-9, 2015.
[39] T. Salimans, I. Goodfellow, W. Zaremba, et al., “Improved techniques for training GANs,” in Proc. Advances in Neural Information Processing Systems, Barcelona, Spain, Dec.5-10, 2016, pp.2234-2242.
[40] B. C. Russell, A. Torralba, K. P. Murphy, et al., “Labelme: a database and web-based tool for image annotation,” Int. Journal of Computer Vision, vol.77, no.1-3, pp.157-173, 2008.
[41] J. Deng, W. Dong, R. Socher, et al., “Imagenet: a large-scale hierarchical image database,” in Proc. IEEE Conf. on Computer Vision and Pattern Recognition, Miami, FL, Jun.20-25, 2009, pp.248-255.
[42] Y. Lin, J. B. Michel, E. L. Aiden, et al., “Syntactic annotations for the google books ngram corpus,” in Proc. ACL Conf. System Demonstrations, Jeju Island, Korea, Jul.9-11, 2012, pp.169-174.
[43] M. Abadi, A. Agarwal, P. Barham, et al., Tensorflow: Large-Scale Machine Learning on Heterogeneous Distributed Systems, Technical Report, tensorflow.org, 2015.
[44] C. Barnes, E. Shechtman, A. Finkelstein, et al., “Patchmatch: a randomized correspondence algorithm for structural image editing,” ACM Trans. Graphics, vol.28, no.3, p.24:1-24:11, 2009.