References
[1] D. Wang, J. Liu, X. Fan, and R. Liu, "Unsupervised misaligned infrared and visible image fusion via cross-modality image generation and registration," arXiv preprint arXiv:2205.11876, 2022.
[2] L. Tang, Y. Deng, Y. Ma, J. Huang, and J. Ma, "SuperFusion: A Versatile Image Registration and Fusion Network with Semantic Awareness," IEEE/CAA Journal of Automatica Sinica, vol. 9, no. 12, pp. 2121-2137, 2022, doi: 10.1109/JAS.2022.106082.
[3] J. Liu et al., "Target-aware Dual Adversarial Learning and a Multi-scenario Multi-Modality Benchmark to Fuse Infrared and Visible for Object Detection," in 2022 IEEE/CVF Conference on Computer Vision and Pattern Recognition (CVPR), 2022, pp. 5792-5801, doi: 10.1109/CVPR52688.2022.00571.
[4] L. Tang, J. Yuan, and J. Ma, "Image fusion in the loop of high-level vision tasks: A semantic-aware real-time infrared and visible image fusion network," Information Fusion, vol. 82, pp. 28-42, 2022, doi: 10.1016/j.inffus.2021.12.004.
[5] H. Zhang, H. Xu, X. Tian, J. Jiang, and J. Ma, "Image fusion meets deep learning: A survey and perspective," Information Fusion, vol. 76, pp. 323-336, 2021, doi: 10.1016/j.inffus.2021.06.008.
[6] H. Li and X. J. Wu, "DenseFuse: A Fusion Approach to Infrared and Visible Images," IEEE Transactions on Image Processing, vol. 28, no. 5, pp. 2614-2623, 2019, doi: 10.1109/TIP.2018.2887342.
[7] H. Li, X. J. Wu, and T. Durrani, "NestFuse: An Infrared and Visible Image Fusion Architecture Based on Nest Connection and Spatial/Channel Attention Models," IEEE Transactions on Instrumentation and Measurement, vol. 69, no. 12, pp. 9645-9656, 2020, doi: 10.1109/TIM.2020.3005230.
[8] L. Tang, J. Yuan, H. Zhang, X. Jiang, and J. Ma, "PIAFusion: A progressive infrared and visible image fusion network based on illumination aware," Information Fusion, vol. 83-84, pp. 79-92, 2022, doi: 10.1016/j.inffus.2022.03.007.
[9] J. Ma, W. Yu, P. Liang, C. Li, and J. Jiang, "FusionGAN: A generative adversarial network for infrared and visible image fusion," Information Fusion, vol. 48, pp. 11-26, 2019, doi: 10.1016/j.inffus.2018.09.004.
[10] J. Ma, H. Xu, J. Jiang, X. Mei, and X. P. Zhang, "DDcGAN: A Dual-Discriminator Conditional Generative Adversarial Network for Multi-Resolution Image Fusion," IEEE Transactions on Image Processing, vol. 29, pp. 4980-4995, 2020, doi: 10.1109/TIP.2020.2977573.
[11] A. Toet, "The TNO Multiband Image Data Collection," Data in Brief, vol. 15, pp. 249-251, 2017, doi: 10.1016/j.dib.2017.09.038.
[12] H. Xu, J. Ma, J. Jiang, X. Guo, and H. Ling, "U2Fusion: A Unified Unsupervised Image Fusion Network," IEEE Transactions on Pattern Analysis and Machine Intelligence, vol. 44, no. 1, pp. 502-518, 2022, doi: 10.1109/TPAMI.2020.3012548.
[13] X. Jia, C. Zhu, M. Li, W. Tang, S. Liu, and W. Zhou, "LLVIP: A Visible-infrared Paired Dataset for Low-light Vision," arXiv preprint arXiv:2108.10831, 2021.
[14] M. Cordts et al., "The Cityscapes Dataset for Semantic Urban Scene Understanding," in 2016 IEEE Conference on Computer Vision and Pattern Recognition (CVPR), 2016, pp. 3213-3223, doi: 10.1109/CVPR.2016.350.
[15] E. Xie, W. Wang, Z. Yu, A. Anandkumar, J. M. Alvarez, and P. Luo, "SegFormer: Simple and efficient design for semantic segmentation with transformers," Advances in Neural Information Processing Systems, vol. 34, pp. 12077-12090, 2021.
[16] C. Peng, T. Tian, C. Chen, X. Guo, and J. Ma, "Bilateral attention decoder: A lightweight decoder for real-time semantic segmentation," Neural Networks, vol. 137, pp. 188-199, 2021, doi: 10.1016/j.neunet.2021.01.021.
[17] J. Liu, X. Fan, J. Jiang, R. Liu, and Z. Luo, "Learning a Deep Multi-Scale Feature Ensemble and an Edge-Attention Guidance for Image Fusion," IEEE Transactions on Circuits and Systems for Video Technology, vol. 32, no. 1, pp. 105-119, 2022, doi: 10.1109/TCSVT.2021.3056725.
[18] G. Qu, D. Zhang, and P. Yan, "Information measure for performance of image fusion," Electronics Letters, vol. 38, pp. 313-315, 2002, doi: 10.1049/el:20020212.
[19] H. R. Sheikh and A. C. Bovik, "Image information and visual quality," IEEE Transactions on Image Processing, vol. 15, no. 2, pp. 430-444, 2006, doi: 10.1109/TIP.2005.859378.
[20] V. Aslantas and E. Bendes, "A new image quality metric for image fusion: The sum of the correlations of differences," AEU - International Journal of Electronics and Communications, vol. 69, no. 12, pp. 1890-1896, 2015, doi: 10.1016/j.aeue.2015.09.004.
[21] C. Xydeas and V. Petrovic, "Objective image fusion performance measure," Electronics Letters, vol. 36, pp. 308-309, 2000, doi: 10.1049/el:20000267.