A journal of IEEE and CAA, publishing high-quality English-language papers on original theoretical and experimental research and development in all areas of automation.
Volume 9 Issue 8
Aug.  2022

IEEE/CAA Journal of Automatica Sinica

Y. Liu, Y. Shi, F. H. Mu, J. Cheng, and X. Chen, “Glioma segmentation-oriented multi-modal MR image fusion with adversarial learning,” IEEE/CAA J. Autom. Sinica, vol. 9, no. 8, pp. 1528–1531, Aug. 2022. doi: 10.1109/JAS.2022.105770

Glioma Segmentation-Oriented Multi-Modal MR Image Fusion With Adversarial Learning

doi: 10.1109/JAS.2022.105770

    Figures(5)  / Tables(2)
