Citation: X. Gao, Y. Gao, A. Dong, J. Cheng, and G. Lv, “HaIVFusion: Haze-free infrared and visible image fusion,” IEEE/CAA J. Autom. Sinica, 2025. doi: 10.1109/JAS.2024.124926
[1] J. Ma, Y. Ma, and C. Li, “Infrared and visible image fusion methods and applications: A survey,” Information Fusion, vol. 45, pp. 153–178, 2019. doi: 10.1016/j.inffus.2018.02.004
[2] Q. Ha, K. Watanabe, T. Karasawa, Y. Ushiku, and T. Harada, “MFNet: Towards real-time semantic segmentation for autonomous vehicles with multi-spectral scenes,” in Proc. the IEEE Int. Conf. on Intelligent Robots and Systems, 2017, pp. 5108–5115.
[3] Y. Cao, D. Guan, W. Huang, J. Yang, Y. Cao, and Y. Qiao, “Pedestrian detection with unsupervised multispectral feature learning using deep neural networks,” Information Fusion, vol. 46, pp. 206–217, 2019. doi: 10.1016/j.inffus.2018.06.005
[4] R. Zhang, L. Li, Q. Zhang, J. Zhang, L. Xu, B. Zhang, and B. Wang, “Differential feature awareness network within antagonistic learning for infrared-visible object detection,” IEEE Trans. Circuits and Systems for Video Technology, pp. 1–1, 2023.
[5] R. Zhang, L. Xu, Z. Yu, Y. Shi, C. Mu, and M. Xu, “Deep-IRTarget: An automatic target detector in infrared imagery using dual-domain feature extraction and allocation,” IEEE Trans. Multimedia, vol. 24, pp. 1735–1749, 2021.
[6] S. Das and Y. Zhang, “Color night vision for navigation and surveillance,” Transportation Research Record, vol. 1708, no. 1, pp. 40–46, 2000. doi: 10.3141/1708-05
[7] S. Li, B. Yang, and J. Hu, “Performance comparison of different multi-resolution transforms for image fusion,” Information Fusion, vol. 12, no. 2, pp. 74–84, 2011. doi: 10.1016/j.inffus.2010.03.002
[8] J. Wang, J. Peng, X. Feng, G. He, and J. Fan, “Fusion method for infrared and visible images by using non-negative sparse representation,” Infrared Physics & Technology, vol. 67, pp. 477–489, 2014.
[9] X. Zhang, Y. Ma, F. Fan, Y. Zhang, and J. Huang, “Infrared and visible image fusion via saliency analysis and local edge-preserving multi-scale decomposition,” JOSA A, vol. 34, no. 8, pp. 1400–1410, 2017. doi: 10.1364/JOSAA.34.001400
[10] J. Mou, W. Gao, and Z. Song, “Image fusion based on non-negative matrix factorization and infrared feature extraction,” in Proc. the Int. Congress on Image and Signal Processing, vol. 2, 2013, pp. 1046–1050.
[11] Y. Liu, S. Liu, and Z. Wang, “A general framework for image fusion based on multi-scale transform and sparse representation,” Information Fusion, vol. 24, pp. 147–164, 2015. doi: 10.1016/j.inffus.2014.09.004
[12] H. Zhang, H. Xu, X. Tian, J. Jiang, and J. Ma, “Image fusion meets deep learning: A survey and perspective,” Information Fusion, vol. 76, pp. 323–336, 2021. doi: 10.1016/j.inffus.2021.06.008
[13] H. Li and X.-J. Wu, “DenseFuse: A fusion approach to infrared and visible images,” IEEE Trans. Image Processing, vol. 28, no. 5, pp. 2614–2623, 2019. doi: 10.1109/TIP.2018.2887342
[14] G. Zhang, R. Nie, and J. Cao, “SSL-WAEIE: Self-supervised learning with weighted auto-encoding and information exchange for infrared and visible image fusion,” IEEE/CAA Journal of Automatica Sinica, vol. 9, no. 9, pp. 1694–1697, 2022. doi: 10.1109/JAS.2022.105815
[15] L. Tang, Y. Deng, Y. Ma, J. Huang, and J. Ma, “SuperFusion: A versatile image registration and fusion network with semantic awareness,” IEEE/CAA Journal of Automatica Sinica, vol. 9, no. 12, pp. 2121–2137, 2022. doi: 10.1109/JAS.2022.106082
[16] Q. Kong, H. Zhou, and Y. Wu, “NormFuse: Infrared and visible image fusion with pixel-adaptive normalization,” IEEE/CAA Journal of Automatica Sinica, vol. 9, no. 12, pp. 2190–2192, 2022. doi: 10.1109/JAS.2022.106112
[17] J. Ma, W. Yu, P. Liang, C. Li, and J. Jiang, “FusionGAN: A generative adversarial network for infrared and visible image fusion,” Information Fusion, vol. 48, pp. 11–26, 2019. doi: 10.1016/j.inffus.2018.09.004
[18] J. Ma, L. Tang, F. Fan, J. Huang, X. Mei, and Y. Ma, “SwinFusion: Cross-domain long-range learning for general image fusion via Swin Transformer,” IEEE/CAA Journal of Automatica Sinica, vol. 9, no. 7, pp. 1200–1217, 2022. doi: 10.1109/JAS.2022.105686
[19] L. Tang, X. Xiang, H. Zhang, M. Gong, and J. Ma, “DIVFusion: Darkness-free infrared and visible image fusion,” Information Fusion, vol. 91, pp. 477–493, 2023. doi: 10.1016/j.inffus.2022.10.034
[20] Z. Jingyun, D. Yifan, Y. Yi, and S. Jiasong, “Real-time defog model based on visible and near-infrared information,” in Proc. the IEEE Int. Conf. on Multimedia & Expo Workshops, 2016, pp. 1–6.
[21] J. Liu, X. Fan, Z. Huang, G. Wu, R. Liu, W. Zhong, and Z. Luo, “Target-aware dual adversarial learning and a multi-scenario multi-modality benchmark to fuse infrared and visible for object detection,” in Proc. the IEEE/CVF Conf. on Computer Vision and Pattern Recognition, 2022, pp. 5802–5811.
[22] H. Li, X.-J. Wu, and T. Durrani, “NestFuse: An infrared and visible image fusion architecture based on nest connection and spatial/channel attention models,” IEEE Trans. Instrumentation and Measurement, vol. 69, no. 12, pp. 9645–9656, 2020. doi: 10.1109/TIM.2020.3005230
[23] H. Li, X.-J. Wu, and J. Kittler, “RFN-Nest: An end-to-end residual fusion network for infrared and visible images,” Information Fusion, vol. 73, pp. 72–86, 2021. doi: 10.1016/j.inffus.2021.02.023
[24] H. Xu, H. Zhang, and J. Ma, “Classification saliency-based rule for visible and infrared image fusion,” IEEE Trans. Computational Imaging, vol. 7, pp. 824–836, 2021. doi: 10.1109/TCI.2021.3100986
[25] H. Zhang, H. Xu, Y. Xiao, X. Guo, and J. Ma, “Rethinking the image fusion: A fast unified image fusion network based on proportional maintenance of gradient and intensity,” in Proc. the AAAI Conf. on Artificial Intelligence, vol. 34, no. 7, 2020, pp. 12797–12804.
[26] J. Ma, L. Tang, M. Xu, H. Zhang, and G. Xiao, “STDFusionNet: An infrared and visible image fusion network based on salient target detection,” IEEE Trans. Instrumentation and Measurement, vol. 70, pp. 1–13, 2021.
[27] L. Tang, J. Yuan, and J. Ma, “Image fusion in the loop of high-level vision tasks: A semantic-aware real-time infrared and visible image fusion network,” Information Fusion, vol. 82, pp. 28–42, 2022. doi: 10.1016/j.inffus.2021.12.004
[28] L. Tang, H. Zhang, H. Xu, and J. Ma, “Rethinking the necessity of image fusion in high-level vision tasks: A practical infrared and visible image fusion network based on progressive semantic injection and scene fidelity,” Information Fusion, vol. 99, p. 101870, 2023. doi: 10.1016/j.inffus.2023.101870
[29] L. Tang, J. Yuan, H. Zhang, X. Jiang, and J. Ma, “PIAFusion: A progressive infrared and visible image fusion network based on illumination aware,” Information Fusion, vol. 83, pp. 79–92, 2022.
[30] H. Xu, J. Yuan, and J. Ma, “MURF: Mutually reinforcing multi-modal image registration and fusion,” IEEE Trans. Pattern Analysis and Machine Intelligence, vol. 45, pp. 12148–12166, 2023. doi: 10.1109/TPAMI.2023.3283682
[31] J. Ma, H. Xu, J. Jiang, X. Mei, and X.-P. Zhang, “DDcGAN: A dual-discriminator conditional generative adversarial network for multi-resolution image fusion,” IEEE Trans. Image Processing, vol. 29, pp. 4980–4995, 2020. doi: 10.1109/TIP.2020.2977573
[32] J. Li, H. Huo, C. Li, R. Wang, and Q. Feng, “AttentionFGAN: Infrared and visible image fusion using attention-based generative adversarial networks,” IEEE Trans. Multimedia, vol. 23, pp. 1383–1396, 2020.
[33] Z. Wang, Y. Chen, W. Shao, H. Li, and L. Zhang, “SwinFuse: A residual Swin Transformer fusion network for infrared and visible images,” IEEE Trans. Instrumentation and Measurement, vol. 71, pp. 1–12, 2022.
[34] E. J. McCartney, “Optics of the atmosphere: Scattering by molecules and particles,” New York: Wiley, 1976.
[35] K. He, J. Sun, and X. Tang, “Single image haze removal using dark channel prior,” IEEE Trans. Pattern Analysis and Machine Intelligence, vol. 33, no. 12, pp. 2341–2353, 2011.
[36] S. Salazar-Colores, E. Cabal-Yepez, J. M. Ramos-Arreguin, G. Botella, L. M. Ledesma-Carrillo, and S. Ledesma, “A fast image dehazing algorithm using morphological reconstruction,” IEEE Trans. Image Processing, vol. 28, no. 5, pp. 2357–2366, 2019.
[37] B. Cai, X. Xu, K. Jia, C. Qing, and D. Tao, “DehazeNet: An end-to-end system for single image haze removal,” IEEE Trans. Image Processing, vol. 25, no. 11, pp. 5187–5198, 2016. doi: 10.1109/TIP.2016.2598681
[38] B. Li, X. Peng, Z. Wang, J. Xu, and D. Feng, “AOD-Net: All-in-one dehazing network,” in Proc. the IEEE Int. Conf. on Computer Vision, 2017, pp. 4770–4778.
[39] H. Zhu, X. Peng, V. Chandrasekhar, L. Li, and J.-H. Lim, “DehazeGAN: When image dehazing meets differential programming,” in Proc. the Int. Joint Conf. on Artificial Intelligence, 2018, pp. 1234–1240.
[40] X. Yang, Z. Xu, and J. Luo, “Towards perceptual image dehazing by physics-based disentanglement and adversarial training,” in Proc. the AAAI Conf. on Artificial Intelligence, vol. 32, no. 1, 2018.
[41] J. Fan, F. Guo, J. Qian, X. Li, J. Li, and J. Yang, “Non-aligned supervision for real image dehazing,” arXiv preprint arXiv:2303.04940, 2023.
[42] F. Fang, F. Li, and T. Zeng, “Single image dehazing and denoising: A fast variational approach,” SIAM Journal on Imaging Sciences, vol. 7, no. 2, pp. 969–996, 2014. doi: 10.1137/130919696
[43] C. O. Ancuti, C. Ancuti, C. Hermans, and P. Bekaert, “A fast semi-inverse approach to detect and remove the haze from a single image,” in Proc. the Asian Conf. on Computer Vision, 2011, pp. 501–514.
[44] F. Dümbgen, M. El Helou, N. Gucevska, and S. Süsstrunk, “Near-infrared fusion for photorealistic image dehazing,” IS&T EI Proceedings, 2018.
[45] H. Xu, J. Ma, J. Jiang, X. Guo, and H. Ling, “U2Fusion: A unified unsupervised image fusion network,” IEEE Trans. Pattern Analysis and Machine Intelligence, vol. 44, no. 1, pp. 502–518, 2022. doi: 10.1109/TPAMI.2020.3012548
[46] J. Liu, Z. Liu, G. Wu, L. Ma, R. Liu, W. Zhong, Z. Luo, and X. Fan, “Multi-interactive feature learning and a full-time multi-modality benchmark for image fusion and segmentation,” in Proc. the IEEE/CVF Int. Conf. on Computer Vision, 2023, pp. 8115–8124.
[47] H. Li, T. Xu, X.-J. Wu, J. Lu, and J. Kittler, “LRRNet: A novel representation learning guided fusion network for infrared and visible images,” IEEE Trans. Pattern Analysis and Machine Intelligence, vol. 45, no. 9, pp. 11040–11052, 2023. doi: 10.1109/TPAMI.2023.3268209
[48] Y.-J. Rao, “In-fibre Bragg grating sensors,” Measurement Science and Technology, vol. 8, no. 4, p. 355, 1997. doi: 10.1088/0957-0233/8/4/002
[49] G. Cui, H. Feng, Z. Xu, Q. Li, and Y. Chen, “Detail preserved fusion of visible and infrared images using regional saliency extraction and multi-scale image decomposition,” Optics Communications, vol. 341, pp. 199–209, 2015. doi: 10.1016/j.optcom.2014.12.032
[50] J. W. Roberts, J. A. Van Aardt, and F. B. Ahmed, “Assessment of image fusion procedures using entropy, image quality, and multispectral classification,” Journal of Applied Remote Sensing, vol. 2, no. 1, p. 023522, 2008. doi: 10.1117/1.2945910
[51] Y. Han, Y. Cai, Y. Cao, and X. Xu, “A new image fusion performance metric based on visual information fidelity,” Information Fusion, vol. 14, no. 2, pp. 127–135, 2013. doi: 10.1016/j.inffus.2011.08.002
[52] A. M. Eskicioglu and P. S. Fisher, “Image quality measures and their performance,” IEEE Trans. Communications, vol. 43, no. 12, pp. 2959–2965, 1995. doi: 10.1109/26.477498
[53] M. Haghighat and M. A. Razian, “Fast-FMI: Non-reference image fusion metric,” in Proc. the Int. Conf. on Application of Information and Communication Technologies, 2014, pp. 1–3.
[54] C. O. Ancuti, C. Ancuti, and R. Timofte, “NH-HAZE: An image dehazing benchmark with non-homogeneous hazy and haze-free images,” in Proc. the IEEE/CVF Conf. on Computer Vision and Pattern Recognition Workshops, 2020, pp. 444–445.
[55] Y. Zheng, J. Zhan, S. He, J. Dong, and Y. Du, “Curricular contrastive regularization for physics-aware single image dehazing,” in Proc. the IEEE/CVF Conf. on Computer Vision and Pattern Recognition, 2023, pp. 5785–5794.
[56] Y. Qiu, K. Zhang, C. Wang, W. Luo, H. Li, and Z. Jin, “MB-TaylorFormer: Multi-branch efficient transformer expanded by Taylor formula for image dehazing,” in Proc. the IEEE/CVF Int. Conf. on Computer Vision, 2023, pp. 12802–12813.
[57] R.-Q. Wu, Z.-P. Duan, C.-L. Guo, Z. Chai, and C. Li, “RIDCP: Revitalizing real image dehazing via high-quality codebook priors,” in Proc. the IEEE/CVF Conf. on Computer Vision and Pattern Recognition, 2023, pp. 22282–22291.
[58] J. Redmon, S. Divvala, R. Girshick, and A. Farhadi, “You only look once: Unified, real-time object detection,” in Proc. the IEEE Conf. on Computer Vision and Pattern Recognition, 2016, pp. 779–788.