References
[1] L. Tang, J. Yuan, and J. Ma, "Image fusion in the loop of high-level vision tasks: A semantic-aware real-time infrared and visible image fusion network," Information Fusion, vol. 82, pp. 28-42, 2022, doi: 10.1016/j.inffus.2021.12.004.
[2] H. Li and X. J. Wu, "DenseFuse: A Fusion Approach to Infrared and Visible Images," IEEE Transactions on Image Processing, vol. 28, no. 5, pp. 2614-2623, 2019, doi: 10.1109/TIP.2018.2887342.
[3] J. Hu, L. Shen, and G. Sun, "Squeeze-and-Excitation Networks," in 2018 IEEE/CVF Conference on Computer Vision and Pattern Recognition, 2018, pp. 7132-7141, doi: 10.1109/CVPR.2018.00745.
[4] Y. Zhang, Y. Liu, P. Sun, H. Yan, X. Zhao, and L. Zhang, "IFCNN: A general image fusion framework based on convolutional neural network," Information Fusion, vol. 54, pp. 99-118, 2020, doi: 10.1016/j.inffus.2019.07.011.
[5] K. He, X. Zhang, S. Ren, and J. Sun, "Deep residual learning for image recognition," in Proceedings of the IEEE conference on computer vision and pattern recognition, 2016, pp. 770-778.
[6] G. Huang, Z. Liu, L. Van Der Maaten, and K. Q. Weinberger, "Densely connected convolutional networks," in Proceedings of the IEEE conference on computer vision and pattern recognition, 2017, pp. 4700-4708.
[7] R. Hou et al., "VIF-Net: An Unsupervised Framework for Infrared and Visible Image Fusion," IEEE Transactions on Computational Imaging, vol. 6, pp. 640-651, 2020, doi: 10.1109/TCI.2020.2965304.
[8] H. Li, X.-J. Wu, and J. Kittler, "Infrared and visible image fusion using a deep learning framework," in 2018 24th international conference on pattern recognition (ICPR), 2018: IEEE, pp. 2705-2710.
[9] J. Ma, W. Yu, P. Liang, C. Li, and J. Jiang, "FusionGAN: A generative adversarial network for infrared and visible image fusion," Information Fusion, vol. 48, pp. 11-26, 2019, doi: 10.1016/j.inffus.2018.09.004.
[10] H. Xu, P. Liang, W. Yu, J. Jiang, and J. Ma, "Learning a Generative Model for Fusing Infrared and Visible Images via Conditional Generative Adversarial Network with Dual Discriminators," in Proceedings of the Twenty-Eighth International Joint Conference on Artificial Intelligence (IJCAI-19), 2019, doi: 10.24963/ijcai.2019/549.
[11] D. Han, L. Li, X. Guo, and J. Ma, "Multi-exposure image fusion via deep perceptual enhancement," Information Fusion, vol. 79, pp. 248-262, 2022, doi: 10.1016/j.inffus.2021.10.006.
[12] H. Xu, J. Ma, J. Jiang, X. Guo, and H. Ling, "U2Fusion: A Unified Unsupervised Image Fusion Network," IEEE Transactions on Pattern Analysis and Machine Intelligence, vol. 44, no. 1, pp. 502-518, 2022, doi: 10.1109/TPAMI.2020.3012548.
[13] X. Jia, C. Zhu, M. Li, W. Tang, and W. Zhou, "LLVIP: A visible-infrared paired dataset for low-light vision," in Proceedings of the IEEE/CVF International Conference on Computer Vision, 2021, pp. 3496-3504.
[14] A. Toet, "TNO Image Fusion Dataset," figshare, 2022.
[15] Q. Ha, K. Watanabe, T. Karasawa, Y. Ushiku, and T. Harada, "MFNet: Towards real-time semantic segmentation for autonomous vehicles with multi-spectral scenes," in 2017 IEEE/RSJ International Conference on Intelligent Robots and Systems (IROS), 2017, pp. 5108-5115, doi: 10.1109/IROS.2017.8206396.
[16] J. Liu, X. Fan, J. Jiang, R. Liu, and Z. Luo, "Learning a Deep Multi-Scale Feature Ensemble and an Edge-Attention Guidance for Image Fusion," IEEE Transactions on Circuits and Systems for Video Technology, vol. 32, no. 1, pp. 105-119, 2022, doi: 10.1109/TCSVT.2021.3056725.
[17] J. Liu et al., "Target-aware dual adversarial learning and a multi-scenario multi-modality benchmark to fuse infrared and visible for object detection," in Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition, 2022, pp. 5802-5811.
[18] J. Ma, L. Tang, F. Fan, J. Huang, X. Mei, and Y. Ma, "SwinFusion: Cross-domain Long-range Learning for General Image Fusion via Swin Transformer," IEEE/CAA Journal of Automatica Sinica, vol. 9, no. 7, pp. 1200-1217, 2022, doi: 10.1109/JAS.2022.105686.