References
[1] P. Hsu and B. Y. Chen, “Blurred image detection and classification,” in Proc. International Conference on Multimedia Modeling, pp. 277-286, Jan. 2008.
[2] S. Lee and S. Cho, “Recent advances in image deblurring,” in Proc. SIGGRAPH Asia 2013 Courses, pp. 1-108, Nov. 2013.
[3] E. O. Brigham and R. E. Morrow, “The fast Fourier transform,” IEEE Spectrum, Vol. 4, No. 12, pp. 63-70, Dec. 1967.
[4] L. Lucy, “An iterative technique for the rectification of observed distributions,” Astronomical Journal, Vol. 79, pp. 745-754, 1974.
[5] J. Sun, W. Cao, Z. Xu, and J. Ponce, “Learning a convolutional neural network for non-uniform motion blur removal,” in Proc. IEEE Conference on Computer Vision and Pattern Recognition, pp. 769-777, June 2015.
[6] A. Chakrabarti, “A neural approach to blind motion deblurring,” in Proc. European Conference on Computer Vision, pp. 221-235, Oct. 2016.
[7] X. Tao, H. Gao, X. Shen, J. Wang, and J. Jia, “Scale-recurrent network for deep image deblurring,” in Proc. IEEE Conference on Computer Vision and Pattern Recognition, pp. 8174-8182, June 2018.
[8] X. Shi, Z. Chen, H. Wang, D. Y. Yeung, W. K. Wong, and W. C. Woo, “Convolutional LSTM network: a machine learning approach for precipitation nowcasting,” in Proc. 28th International Conference on Neural Information Processing Systems, Vol. 1, pp. 802-810, Dec. 2015.
[9] H. Gao, X. Tao, X. Shen, and J. Jia, “Dynamic scene deblurring with parameter selective sharing and nested skip connections,” in Proc. IEEE Conference on Computer Vision and Pattern Recognition (CVPR), pp. 3848-3856, June 2019.
[10] O. Kupyn, T. Martyniuk, J. Wu, and Z. Wang, “DeblurGAN-v2: deblurring (orders-of-magnitude) faster and better,” in Proc. IEEE International Conference on Computer Vision, pp. 8878-8887, Oct. 2019.
[11] T. Y. Lin, P. Dollár, R. Girshick, K. He, B. Hariharan, and S. Belongie, “Feature pyramid networks for object detection,” in Proc. IEEE Conference on Computer Vision and Pattern Recognition, pp. 2117-2125, July 2017.
[12] J. U. Yun, B. Jo, and I. K. Park, “Joint face super-resolution and deblurring using generative adversarial network,” IEEE Access, Vol. 8, pp. 159661-159671, Aug. 2020.
[13] S. W. Zamir, A. Arora, S. Khan, M. Hayat, F. S. Khan, M. H. Yang, and L. Shao, “Multi-stage progressive image restoration,” in Proc. IEEE Conference on Computer Vision and Pattern Recognition, June 2021.
[14] S. Nah, T. H. Kim, and K. M. Lee, “Deep multi-scale convolutional neural network for dynamic scene deblurring,” in Proc. IEEE Conference on Computer Vision and Pattern Recognition, pp. 3883-3891, July 2017.
[15] A. Krizhevsky, I. Sutskever, and G. E. Hinton, “ImageNet classification with deep convolutional neural networks,” Communications of the ACM, Vol. 60, pp. 84-90, June 2017.
[16] M. Suin, K. Purohit, and A. N. Rajagopalan, “Spatially-attentive patch-hierarchical network for adaptive motion deblurring,” in Proc. IEEE Conference on Computer Vision and Pattern Recognition, pp. 3606-3615, June 2020.
[17] H. Zhang, Y. Dai, H. Li, and P. Koniusz, “Deep stacked hierarchical multi-patch network for image deblurring,” in Proc. IEEE Conference on Computer Vision and Pattern Recognition, pp. 5978-5986, June 2019.
[18] K. Purohit and A. N. Rajagopalan, “Region-adaptive dense network for efficient motion deblurring,” in Proc. AAAI Conference on Artificial Intelligence, Vol. 34, No. 07, pp. 11882-11889, Apr. 2020.
[19] G. Huang, Z. Liu, L. Van Der Maaten, and K. Q. Weinberger, “Densely connected convolutional networks,” in Proc. IEEE Conference on Computer Vision and Pattern Recognition, pp. 4700-4708, July 2017.
[20] Y. Yuan, W. Su, and D. Ma, “Efficient dynamic scene deblurring using spatially variant deconvolution network with optical flow guided training,” in Proc. IEEE Conference on Computer Vision and Pattern Recognition, pp. 3555-3564, June 2020.
[21] F. J. Tsai, Y. T. Peng, Y. Y. Lin, C. C. Tsai, and C. W. Lin, “BANet: Blur-aware attention networks for dynamic scene deblurring,” arXiv preprint arXiv:2101.07518, Jan. 2021.
[22] I. J. Goodfellow, J. Pouget-Abadie, M. Mirza, B. Xu, D. Warde-Farley, S. Ozair, A. C. Courville, and Y. Bengio, “Generative adversarial nets,” in Proc. Neural Information Processing Systems, pp. 2672-2680, Dec. 2014.
[23] H. Thanh-Tung and T. Tran, “Catastrophic forgetting and mode collapse in GANs,” in Proc. International Joint Conference on Neural Networks (IJCNN), pp. 1-10, July 2020.
[24] M. Arjovsky, S. Chintala, and L. Bottou, “Wasserstein generative adversarial networks,” in Proc. International Conference on Machine Learning (ICML), pp. 214-223, July 2017.
[25] I. Gulrajani, F. Ahmed, M. Arjovsky, V. Dumoulin, and A. C. Courville, “Improved training of Wasserstein GANs,” in Proc. International Conference on Neural Information Processing Systems (NIPS), pp. 5769-5779, Dec. 2017.
[26] T. Miyato, T. Kataoka, M. Koyama, and Y. Yoshida, “Spectral normalization for generative adversarial networks,” in Proc. International Conference on Learning Representations, Feb. 2018.
[27] J. Heinonen, “Lectures on Lipschitz analysis,” Lecture notes, University of Jyväskylä, 2005.
[28] X. Mao, Q. Li, H. Xie, R. Y. Lau, Z. Wang, and S. Paul Smolley, “Least squares generative adversarial networks,” in Proc. IEEE International Conference on Computer Vision (ICCV), pp. 2794-2802, Oct. 2017.
[29] S. Ramakrishnan, S. Pachori, A. Gangopadhyay, and S. Raman, “Deep generative filter for motion deblurring,” in Proc. IEEE International Conference on Computer Vision Workshops, pp. 2993-3000, Sep. 2017.
[30] M. Mirza and S. Osindero, “Conditional generative adversarial nets,” arXiv preprint arXiv:1411.1784, Nov. 2014.
[31] S. Zheng, Z. Zhu, J. Cheng, Y. Guo, and Y. Zhao, “Edge heuristic GAN for non-uniform blind deblurring,” IEEE Signal Processing Letters, pp. 1546-1550, July 2019.
[32] J. Johnson, A. Alahi, and L. Fei-Fei, “Perceptual losses for real-time style transfer and super-resolution,” in Proc. European Conference on Computer Vision, pp. 694-711, Oct. 2016.
[33] J. Pan, D. Sun, H. Pfister, and M.-H. Yang, “Blind image deblurring using dark channel prior,” in Proc. IEEE Conference on Computer Vision and Pattern Recognition (CVPR), pp. 1628-1636, June 2016.
[34] K. Simonyan and A. Zisserman, “Very deep convolutional networks for large-scale image recognition,” in Proc. International Conference on Learning Representations (ICLR), pp. 1-14, May 2015.
[35] H. Tomosada, T. Kudo, T. Fujisawa, and M. Ikehara, “GAN-based image deblurring using DCT discriminator,” in Proc. 25th IEEE International Conference on Pattern Recognition, pp. 3675-3681, Jan. 2021.
[36] C. Szegedy, S. Ioffe, V. Vanhoucke, and A. A. Alemi, “Inception-v4, inception-ResNet and the impact of residual connections on learning,” in Proc. 31st AAAI Conference on Artificial Intelligence, pp. 4278-4284, Feb. 2017.
[37] M. Sandler, A. Howard, M. Zhu, A. Zhmoginov, and L. C. Chen, “MobileNetV2: inverted residuals and linear bottlenecks,” in Proc. IEEE Conference on Computer Vision and Pattern Recognition, pp. 4510-4520, June 2018.
[38] F. Chollet, “Xception: Deep learning with depthwise separable convolutions,” in Proc. IEEE Conference on Computer Vision and Pattern Recognition, pp. 1251-1258, July 2017.
[39] J. Rim, H. Lee, J. Won, and S. Cho, “Real-world blur dataset for learning and benchmarking deblurring algorithms,” in Proc. European Conference on Computer Vision, pp. 184-201, Aug. 2020.
[40] A. Hore and D. Ziou, “Image quality metrics: PSNR vs. SSIM,” in Proc. International Conference on Pattern Recognition, pp. 2366-2369, Aug. 2010.
[41] Z. Wang, A. C. Bovik, H. R. Sheikh, and E. P. Simoncelli, “Image quality assessment: from error visibility to structural similarity,” IEEE Transactions on Image Processing, Vol. 13, No. 4, pp. 600-612, Apr. 2004.