A journal of IEEE and CAA, publishing high-quality papers in English on original theoretical/experimental research and development in all areas of automation
Volume 11, Issue 3, Mar. 2024

IEEE/CAA Journal of Automatica Sinica

  • JCR Impact Factor: 11.8, Top 4% (SCI Q1)
  • CiteScore: 17.6, Top 3% (Q1)
  • Google Scholar h5-index: 77, Top 5
Citation: H. Xu, J. Ma, Y. Yuan, H. Zhang, and X. Guo, “More than lightening: A self-supervised low-light image enhancement method capable for multiple degradations,” IEEE/CAA J. Autom. Sinica, vol. 11, no. 3, pp. 622–637, Mar. 2024. doi: 10.1109/JAS.2024.124263

More Than Lightening: A Self-Supervised Low-Light Image Enhancement Method Capable for Multiple Degradations

doi: 10.1109/JAS.2024.124263
Funds: National Natural Science Foundation of China (62276192)

Abstract

Low-light images suffer from low quality due to poor lighting conditions, noise pollution, and improper camera settings. To enhance low-light images, most existing methods rely on normal-light images for guidance, but collecting suitable normal-light images is difficult. In contrast, a self-supervised method breaks free from the reliance on normal-light data, offering more convenience and better generalization. Existing self-supervised methods focus primarily on illumination adjustment and design pixel-based adjustment schemes, leaving remnants of other degradations, uneven brightness, and artifacts. In response, this paper proposes a self-supervised enhancement method, termed SLIE, which handles multiple degradations, including illumination attenuation, noise pollution, and color shift, all in a self-supervised manner. Illumination attenuation is estimated based on physical principles and local neighborhood information. Noise removal and color shift correction are realized solely with noisy images and images exhibiting color shifts, respectively. The resulting comprehensive, fully self-supervised approach achieves better adaptability and generalization: it is applicable to various low-light conditions and can reproduce the original colors of scenes under natural light. Extensive experiments on four public datasets demonstrate the superiority of SLIE over thirteen state-of-the-art methods. Our code is available at https://github.com/hanna-xu/SLIE.
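The illumination step described in the abstract follows a Retinex-style prior: the illumination map is at least as bright as the brightest color channel at each pixel and varies smoothly over local neighborhoods. The Python sketch below is a toy illustration of that prior only, not the authors' SLIE network; the function name, neighborhood size, and gamma value are our own assumptions.

    import numpy as np
    from scipy.ndimage import maximum_filter, gaussian_filter

    def enhance_retinex_toy(img, neighborhood=15, gamma=0.6, eps=1e-4):
        """Toy Retinex-style brightening (illustrative, not SLIE).

        img: float32 RGB image in [0, 1], shape (H, W, 3).
        """
        # Physical prior: illumination is at least the brightest channel.
        L = img.max(axis=2)
        # Local neighborhood information: local max followed by a blur
        # yields a piecewise-smooth illumination map.
        L = maximum_filter(L, size=neighborhood)
        L = gaussian_filter(L, sigma=neighborhood / 3.0)
        # Retinex relation I = R * L: recover reflectance, then relight
        # with an attenuated illumination (gamma < 1 lightens).
        R = img / (L[..., None] + eps)
        return np.clip(R * (L[..., None] ** gamma), 0.0, 1.0)

A learned method replaces the hand-tuned smoothing and gamma curve with a network trained without normal-light references; the snippet only conveys the underlying prior.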

     





    Highlights

    • Our method addresses multiple degradations, including illumination attenuation, noise, and color shift
    • A network is designed to adjust low illumination in a self-supervised manner
    • A color correction block is designed, breaking free from reliance on white-balanced images (see the sketch after this list)
    • The self-supervised design improves generalization across various low-light conditions
    • It achieves a balance between parameter count and performance
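
As a companion to the color correction highlight, reference-free correction can be illustrated with the classical gray-world prior: if the average scene color is assumed achromatic, per-channel gains can be computed from the shifted image alone, with no white-balanced ground truth. This is a minimal hypothetical sketch; the gray-world assumption and the function name are ours, not the paper's learned color correction block.

    import numpy as np

    def gray_world_correct(img, eps=1e-6):
        """Gray-world color-shift correction (illustrative only).

        img: float32 RGB image in [0, 1], shape (H, W, 3).
        """
        # Assume the average scene color is gray; rescale each channel
        # so its mean matches the global mean intensity.
        channel_means = img.reshape(-1, 3).mean(axis=0)
        gain = channel_means.mean() / (channel_means + eps)
        return np.clip(img * gain, 0.0, 1.0)

In practice such a correction would run after brightening; a learned block, like the one the highlights describe, replaces the fixed gray-world assumption with statistics learned from images with color shifts.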
