IEEE/CAA Journal of Automatica Sinica
A journal of IEEE and CAA that publishes high-quality papers in English on original theoretical/experimental research and development in all areas of automation.
Volume 10, Issue 1, Jan. 2023

  • JCR Impact Factor: 15.3, Top 1 (SCI Q1)
  • CiteScore: 23.5, Top 2% (Q1)
  • Google Scholar h5-index: 77, Top 5
Citation: X. Y. Wang, J. Y. Ma, and J. J. Jiang, “Contrastive learning for blind super-resolution via a distortion-specific network,” IEEE/CAA J. Autom. Sinica, vol. 10, no. 1, pp. 78–89, Jan. 2023. doi: 10.1109/JAS.2022.105914

Contrastive Learning for Blind Super-Resolution via a Distortion-Specific Network

doi: 10.1109/JAS.2022.105914
Funds:  This work was supported by the National Natural Science Foundation of China (61971165) and the Key Research and Development Program of Hubei Province (2020BAB113).
  • Previous deep learning-based super-resolution (SR) methods rely on the assumption that the degradation process is predefined (e.g., bicubic downsampling); their performance therefore deteriorates when the real degradation does not match this assumption. To handle real-world scenarios, existing blind SR methods estimate both the degradation and the super-resolved image with an extra loss or an iterative scheme. However, degradation estimation requires additional computation, and its accumulated errors limit SR performance. In this paper, we propose a contrastive regularization built upon contrastive learning that exploits blurry images and clear images as negative and positive samples, respectively. The regularization pulls the restored image closer to the clear image and pushes it away from the blurry image in the representation space. Furthermore, instead of estimating the degradation, we extract global statistical prior information that captures the character of the distortion. Considering the coupling between the degradation and the low-resolution image, we embed this global prior into a distortion-specific SR network so that our method adapts to changes in distortion. We term the distortion-specific network with contrastive regularization CRDNet. Extensive experiments on synthetic and real-world scenes demonstrate that our lightweight CRDNet surpasses state-of-the-art blind super-resolution approaches.
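As a rough illustration of the contrastive regularization described above, the following PyTorch sketch pulls the restored image toward the clear positive and pushes it away from the blurry negative in a fixed feature space. The frozen VGG-19 backbone, the L1 feature distances, and the ratio form of the loss are assumptions made for illustration, not the paper's exact design; the blurry negative is assumed to be upsampled to the restored resolution beforehand.

```python
import torch
import torch.nn.functional as F
from torchvision.models import vgg19, VGG19_Weights


class ContrastiveRegularization(torch.nn.Module):
    """Illustrative sketch (not the paper's exact loss): pull the restored
    image toward the clear positive and push it away from the blurry
    negative in a frozen VGG-19 feature space."""

    def __init__(self, layer_idx=26):  # layer_idx is an arbitrary illustrative cut
        super().__init__()
        backbone = vgg19(weights=VGG19_Weights.IMAGENET1K_V1).features[:layer_idx]
        backbone.eval()
        for p in backbone.parameters():  # representation space stays fixed
            p.requires_grad = False
        self.extractor = backbone

    def forward(self, restored, clear, blurry, eps=1e-7):
        # All three inputs are (B, 3, H, W); the blurry negative is assumed
        # already upsampled to the same resolution as the restored output.
        f_restored = self.extractor(restored)
        f_clear = self.extractor(clear)    # positive sample
        f_blurry = self.extractor(blurry)  # negative sample
        d_pos = F.l1_loss(f_restored, f_clear)
        d_neg = F.l1_loss(f_restored, f_blurry)
        # Minimizing the ratio shrinks the distance to the positive while
        # enlarging the distance to the negative.
        return d_pos / (d_neg + eps)
```

In training, such a term would typically be added to the pixel reconstruction loss with a small weight; the weight, feature layer, and distance metric are all tunable choices in this sketch.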

     






    Highlights

    • Our CRDNet achieves the best parameter-performance trade-off among blind SR methods
    • We propose a novel contrastive regularization that improves SR without extra computation
    • The extracted global prior, which captures the degradation, makes our network sensitive to distortion (a conceptual sketch follows this list)
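To make the last highlight concrete, here is a hypothetical sketch of how a global statistical prior could be embedded into an SR branch: channel-wise mean and standard deviation of the low-resolution input act as a cheap distortion descriptor and rescale intermediate features. The module name, the choice of statistics, and the modulation scheme are illustrative assumptions, not the paper's actual architecture.

```python
import torch
import torch.nn as nn


class GlobalPriorModulation(nn.Module):
    """Hypothetical example: global statistics of the LR input serve as a
    distortion descriptor and modulate intermediate SR features."""

    def __init__(self, in_channels=3, feat_channels=64):
        super().__init__()
        # Map the 2*in_channels statistics (per-channel mean and std) to a
        # per-channel scale for the feature maps.
        self.to_scale = nn.Sequential(
            nn.Linear(2 * in_channels, feat_channels),
            nn.ReLU(inplace=True),
            nn.Linear(feat_channels, feat_channels),
            nn.Sigmoid(),
        )

    def forward(self, lr_image, features):
        # Global statistics over the spatial dimensions act as the prior.
        mean = lr_image.mean(dim=(2, 3))
        std = lr_image.std(dim=(2, 3))
        prior = torch.cat([mean, std], dim=1)               # (B, 2*in_channels)
        scale = self.to_scale(prior).unsqueeze(-1).unsqueeze(-1)
        return features * scale                             # distortion-aware features
```

Because the prior is a handful of global numbers rather than an explicit kernel estimate, conditioning of this kind adds negligible computation while still letting the network adapt to different distortions, which is in line with the lightweight design the paper describes.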
