References
[1] R. Smith, “An overview of the Tesseract OCR engine,” in Ninth International Conference on Document Analysis and Recognition (ICDAR 2007), IEEE, vol. 2, 2007, pp. 629–633.
[2] S. Hochreiter and J. Schmidhuber, “Long short-term memory,” Neural Computation, vol. 9, no. 8, pp. 1735–1780, 1997.
[3] M. Mirza and S. Osindero, “Conditional generative adversarial nets,” arXiv preprint arXiv:1411.1784, 2014.
[4] A. Ecins, C. Fermüller, and Y. Aloimonos, “Shadow free segmentation in still images using local density measure,” in 2014 IEEE International Conference on Computational Photography (ICCP), 2014, pp. 1–8.
[5] M. Zhang, W. Zhao, X. Li, and D. Wang, “Shadow detection of moving objects in traffic monitoring video,” in 2020 IEEE 9th Joint International Information Technology and Artificial Intelligence Conference (ITAIC), vol. 9, 2020, pp. 1983–1987.
[6] J. Wang, X. Li, and J. Yang, “Stacked conditional generative adversarial networks for jointly learning shadow detection and shadow removal,” in Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, 2018, pp. 1788–1797.
[7] L. Qu, J. Tian, S. He, Y. Tang, and R. W. Lau, “DeshadowNet: A multi-context embedding deep network for shadow removal,” in Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, 2017, pp. 4067–4075.
[8] G. Finlayson, S. Hordley, C. Lu, and M. Drew, “On the removal of shadows from images,” IEEE Transactions on Pattern Analysis and Machine Intelligence, vol. 28, no. 1, pp. 59–68, 2006.
[9] R. Guo, Q. Dai, and D. Hoiem, “Single-image shadow detection and removal using paired regions,” in CVPR 2011, IEEE, 2011, pp. 2033–2040.
[10] D. Comaniciu and P. Meer, “Mean shift: A robust approach toward feature space analysis,” IEEE Transactions on Pattern Analysis and Machine Intelligence, vol. 24, no. 5, pp. 603–619, 2002.
[11] V. Nguyen, T. F. Yago Vicente, M. Zhao, M. Hoai, and D. Samaras, “Shadow detection with conditional generative adversarial networks,” in Proceedings of the IEEE International Conference on Computer Vision, 2017, pp. 4510–4518.
[12] X. Hu, Y. Jiang, C.-W. Fu, and P.-A. Heng, “Mask-ShadowGAN: Learning to remove shadows from unpaired data,” in Proceedings of the IEEE/CVF International Conference on Computer Vision, 2019, pp. 2472–2481.
[13] J.-Y. Zhu, T. Park, P. Isola, and A. A. Efros, “Unpaired image-to-image translation using cycle-consistent adversarial networks,” in Proceedings of the IEEE International Conference on Computer Vision, 2017, pp. 2223–2232.
[14] Z. Liu, H. Yin, Y. Mi, M. Pu, and S. Wang, “Shadow removal by a lightness-guided network with training on unpaired data,” IEEE Transactions on Image Processing, vol. 30, pp. 1853–1865, 2021.
[15] Y. Jin, A. Sharma, and R. T. Tan, “DC-ShadowNet: Single-image hard and soft shadow removal using unsupervised domain-classifier guided network,” in Proceedings of the IEEE/CVF International Conference on Computer Vision, 2021, pp. 5027–5036.
[16] L. Guo, S. Huang, D. Liu, H. Cheng, and B. Wen, “ShadowFormer: Global context helps image shadow removal,” arXiv preprint arXiv:2302.01650, 2023.
[17] A. Vaswani, N. Shazeer, N. Parmar, et al., “Attention is all you need,” Advances in Neural Information Processing Systems, vol. 30, 2017.
[18] S. Jung, M. A. Hasan, and C. Kim, “Water-filling: An efficient algorithm for digitized document shadow removal,” in Computer Vision–ACCV 2018: 14th Asian Conference on Computer Vision, Perth, Australia, December 2–6, 2018, Revised Selected Papers, Part I 14, Springer, 2019, pp. 398–414.
[19] B. Wang and C. L. P. Chen, “Local water-filling algorithm for shadow detection and removal of document images,” Sensors, vol. 20, no. 23, 2020.
[20] S. Bako, S. Darabi, E. Shechtman, J. Wang, K. Sunkavalli, and P. Sen, “Removing shadows from images of documents,” in Asian Conference on Computer Vision (ACCV 2016), 2016.
[21] N. Kligler, S. Katz, and A. Tal, “Document enhancement using visibility detection,” in Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, 2018, pp. 2374–2382.
[22] J.-R. Wang and Y.-Y. Chuang, “Shadow removal of text document images by estimating local and global background colors,” in ICASSP 2020 - 2020 IEEE International Conference on Acoustics, Speech and Signal Processing (ICASSP), 2020, pp. 1534–1538.
[23] K. Nazeri, E. Ng, T. Joseph, F. Qureshi, and M. Ebrahimi, “EdgeConnect: Structure guided image inpainting using edge prediction,” in Proceedings of the IEEE/CVF International Conference on Computer Vision (ICCV) Workshops, Oct. 2019.
[24] I. Goodfellow, J. Pouget-Abadie, M. Mirza, et al., “Generative adversarial nets,” in Advances in Neural Information Processing Systems, Z. Ghahramani, M. Welling, C. Cortes, N. Lawrence, and K. Weinberger, Eds., vol. 27, Curran Associates, Inc., 2014.
[25] J. Gauthier, “Conditional generative adversarial nets for convolutional face generation,” Class project for Stanford CS231N: Convolutional Neural Networks for Visual Recognition, Winter semester, vol. 2014, no. 5, p. 2, 2014.
[26] D. Michelsanti and Z.-H. Tan, “Conditional generative adversarial networks for speech enhancement and noise-robust speaker verification,” arXiv preprint arXiv:1709.01703, 2017.
[27] H. Park, Y. Yoo, and N. Kwak, “MC-GAN: Multi-conditional generative adversarial network for image synthesis,” arXiv preprint arXiv:1805.01123, 2018.
[28] H. Zhang, V. Sindagi, and V. M. Patel, “Image de-raining using a conditional generative adversarial network,” IEEE Transactions on Circuits and Systems for Video Technology, vol. 30, no. 11, pp. 3943–3956, 2019.
[29] S. Murali, M. R. Rajati, and S. Suryadevara, “Image generation and style transfer using conditional generative adversarial networks,” in 2019 18th IEEE International Conference On Machine Learning And Applications (ICMLA), 2019, pp. 1415–1419.
[30] P. Isola, J.-Y. Zhu, T. Zhou, and A. A. Efros, “Image-to-image translation with conditional adversarial networks,” in Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, 2017, pp. 1125–1134.
[31] O. Ronneberger, P. Fischer, and T. Brox, “U-Net: Convolutional networks for biomedical image segmentation,” in Medical Image Computing and Computer-Assisted Intervention–MICCAI 2015: 18th International Conference, Munich, Germany, October 5–9, 2015, Proceedings, Part III 18, Springer, 2015, pp. 234–241.
[32] A. Aghabiglou and E. M. Eksioglu, “Projection-based cascaded U-Net model for MR image reconstruction,” Computer Methods and Programs in Biomedicine, vol. 207, p. 106151, 2021.
[33] N. Siddique, S. Paheding, C. P. Elkin, and V. Devabhaktuni, “U-Net and its variants for medical image segmentation: A review of theory and applications,” IEEE Access, vol. 9, pp. 82031–82057, 2021.
[34] A. Kar and K. Deb, “Moving cast shadow detection and removal from video based on HSV color space,” in 2015 International Conference on Electrical Engineering and Information Communication Technology (ICEEICT), IEEE, 2015, pp. 1–6.
[35] S. Ioffe and C. Szegedy, “Batch normalization: Accelerating deep network training by reducing internal covariate shift,” in International Conference on Machine Learning, PMLR, 2015, pp. 448–456.
[36] J. A. Hartigan and M. A. Wong, “Algorithm AS 136: A K-means clustering algorithm,” Journal of the Royal Statistical Society, Series C (Applied Statistics), vol. 28, no. 1, pp. 100–108, 1979.
[37] H. Kim and S. Kim, “Automated target detection using K-means based on per-norm for invariant illumination in hyperspectral image,” in 2015 12th International Conference on Ubiquitous Robots and Ambient Intelligence (URAI), 2015, pp. 570–572.
[38] C. Clausner, A. Antonacopoulos, and S. Pletschacher, “ICDAR2017 competition on recognition of documents with complex layouts - RDCL2017,” in 2017 14th IAPR International Conference on Document Analysis and Recognition (ICDAR), vol. 01, 2017, pp. 1404–1410.
[39] Z. Wang, A. Bovik, H. Sheikh, and E. Simoncelli, “Image quality assessment: From error visibility to structural similarity,” IEEE Transactions on Image Processing, vol. 13, no. 4, pp. 600–612, 2004.