A journal of IEEE and CAA, publishing high-quality papers in English on original theoretical/experimental research and development in all areas of automation
Volume 9 Issue 12
Dec. 2022

IEEE/CAA Journal of Automatica Sinica

  • JCR Impact Factor: 11.8, Top 4% (SCI Q1)
  • CiteScore: 23.5, Top 2% (Q1)
  • Google Scholar h5-index: 77, Top 5
Q. Kong, H. B. Zhou, and Y. T. Wu, “NormFuse: Infrared and visible image fusion with pixel-adaptive normalization,” IEEE/CAA J. Autom. Sinica, vol. 9, no. 12, pp. 2190–2192, Dec. 2022. doi: 10.1109/JAS.2022.106112

NormFuse: Infrared and Visible Image Fusion With Pixel-Adaptive Normalization

doi: 10.1109/JAS.2022.106112



Figures (7) / Tables (1)

Article Metrics

Article views: 276 · PDF downloads: 34

