A journal of IEEE and CAA, publishing high-quality papers in English on original theoretical/experimental research and development in all areas of automation.
Volume 9, Issue 9
Sep. 2022

IEEE/CAA Journal of Automatica Sinica

G. C. Zhang, R. C. Nie, and J. D. Cao, “SSL-WAEIE: Self-supervised learning with weighted auto-encoding and information exchange for infrared and visible image fusion,” IEEE/CAA J. Autom. Sinica, vol. 9, no. 9, pp. 1694–1697, Sept. 2022. doi: 10.1109/JAS.2022.105815

SSL-WAEIE: Self-Supervised Learning With Weighted Auto-Encoding and Information Exchange for Infrared and Visible Image Fusion

doi: 10.1109/JAS.2022.105815



Figures (4) / Tables (4)
