IEEE/CAA Journal of Automatica Sinica
Citation: J. Chen, K. L. Wu, Y. Yu, and L. B. Luo, “CDP-GAN: Near-infrared and visible image fusion via color distribution preserved GAN,” IEEE/CAA J. Autom. Sinica, vol. 9, no. 9, pp. 1698–1701, Sept. 2022. doi: 10.1109/JAS.2022.105818
[1] W. Wang and X. Yuan, “Recent advances in image dehazing,” IEEE/CAA J. Autom. Sinica, vol. 4, no. 3, pp. 410–436, 2017. doi: 10.1109/JAS.2017.7510532
[2] H. Zhang, H. Xu, X. Tian, J. Jiang, and J. Ma, “Image fusion meets deep learning: A survey and perspective,” Information Fusion, vol. 76, pp. 323–336, 2021. doi: 10.1016/j.inffus.2021.06.008
[3] J. Ma, Y. Wei, P. Liang, L. Chang, and J. Jiang, “FusionGAN: A generative adversarial network for infrared and visible image fusion,” Information Fusion, vol. 48, pp. 11–26, 2019. doi: 10.1016/j.inffus.2018.09.004
[4] H. Zhang, J. Yuan, X. Tian, and J. Ma, “GAN-FM: Infrared and visible image fusion using GAN with full-scale skip connection and dual Markovian discriminators,” IEEE Trans. Computational Imaging, vol. 7, pp. 1134–1147, 2021. doi: 10.1109/TCI.2021.3119954
[5] Y. Zhang, Y. Liu, P. Sun, H. Yan, X. Zhao, and L. Zhang, “IFCNN: A general image fusion framework based on convolutional neural network,” Information Fusion, vol. 54, pp. 99–118, 2020. doi: 10.1016/j.inffus.2019.07.011
[6] H. Xu, J. Ma, J. Jiang, X. Guo, and H. Ling, “U2Fusion: A unified unsupervised image fusion network,” IEEE Trans. Pattern Analysis and Machine Intelligence, vol. 44, no. 1, pp. 502–518, 2022. doi: 10.1109/TPAMI.2020.3012548
[7] Q. Lian, W. Yan, X. Zhang, and S. Chen, “Single image rain removal using image decomposition and a dense network,” IEEE/CAA J. Autom. Sinica, vol. 6, no. 6, pp. 1428–1437, 2019.
[8] J. Ma, H. Zhang, Z. Shao, P. Liang, and H. Xu, “GANMcC: A generative adversarial network with multiclassification constraints for infrared and visible image fusion,” IEEE Trans. Instrumentation and Measurement, vol. 70, pp. 1–14, 2021.
[9] A. V. Vanmali and V. M. Gadre, “Visible and NIR image fusion using weight-map-guided Laplacian-Gaussian pyramid for improving scene visibility,” Sadhana, vol. 42, no. 7, pp. 1063–1082, 2017. doi: 10.1007/s12046-017-0673-1
[10] Q. Yan, X. Shen, L. Xu, S. Zhuo, X. Zhang, L. Shen, and J. Jia, “Cross-field joint image restoration via scale map,” in Proc. IEEE Int. Conf. Computer Vision, 2013, pp. 1537–1544.
[11] A. V. Vanmali, S. G. Kelkar, and V. M. Gadre, “A novel approach for image dehazing combining visible-NIR images,” in Proc. National Conf. Computer Vision, Pattern Recognition, Image Processing and Graphics, 2015, pp. 1–4.
[12] L. Schaul, C. Fredembach, and S. Süsstrunk, “Color image dehazing using the near-infrared,” in Proc. IEEE Int. Conf. Image Processing, 2009, pp. 1629–1632.
[13] S. Li, X. Kang, and J. Hu, “Image fusion with guided filtering,” IEEE Trans. Image Processing, vol. 22, no. 7, pp. 2864–2875, 2013. doi: 10.1109/TIP.2013.2244222
[14] H. Zhang and J. Ma, “SDNet: A versatile squeeze-and-decomposition network for real-time image fusion,” Int. J. Computer Vision, vol. 129, pp. 2761–2785, 2021. doi: 10.1007/s11263-021-01501-8
[15] P. Dai, Z. Li, Y. Zhang, S. Liu, and B. Zeng, “PBR-Net: Imitating physically based rendering using deep neural network,” IEEE Trans. Image Processing, vol. 29, pp. 5980–5992, 2020. doi: 10.1109/TIP.2020.2987169