
Low-Light Image Enhancement with Multi-stage Residue Quantization and Brightness-aware Attention

  • Yunlong Liu, Tao Huang, Weisheng Dong, Fangfang Wu, Xin Li and Guangming Shi
  • Figure 1. Architecture of the proposed network RQ-LLIE for low-light image enhancement (LLIE). Left: The architecture of the overall network. Right (a): The structure of the Basic block in the left figure. Right (b): The structure of the Brightness-aware attention in the left figure.
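
      A minimal PyTorch-style sketch of the Brightness-aware attention in (b) is given below, assuming the brightness prior is simply a luminance map pooled from the low-light input and used to gate the features; the class name, layer layout, and shapes are our assumptions for illustration, not the paper's exact module.

      import torch
      import torch.nn as nn
      import torch.nn.functional as F

      class BrightnessAwareAttention(nn.Module):
          """Illustrative sketch: a luminance estimate conditions an attention mask."""

          def __init__(self, channels=64):
              super().__init__()
              # Predict a per-pixel, per-channel mask from the features and a brightness map.
              self.attn = nn.Sequential(
                  nn.Conv2d(channels + 1, channels, kernel_size=3, padding=1),
                  nn.ReLU(inplace=True),
                  nn.Conv2d(channels, channels, kernel_size=3, padding=1),
                  nn.Sigmoid(),
              )

          def forward(self, feat, low_light_img):
              # Crude brightness prior: mean over RGB, resized to the feature resolution.
              brightness = low_light_img.mean(dim=1, keepdim=True)
              brightness = F.interpolate(brightness, size=feat.shape[-2:],
                                         mode="bilinear", align_corners=False)
              mask = self.attn(torch.cat([feat, brightness], dim=1))
              return feat + feat * mask  # brightness-modulated residual features

      # Example usage with dummy tensors:
      # out = BrightnessAwareAttention(64)(torch.randn(1, 64, 96, 96), torch.rand(1, 3, 384, 384))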

    Abstract

      Low-light image enhancement (LLIE) aims to recover illumination and improve the visibility of low-light images. Conventional LLIE methods often produce poor results because they neglect the effect of noise interference. Deep learning-based LLIE methods, which learn a mapping function from low-light images to normal-light images, outperform conventional LLIE methods. However, most deep learning-based LLIE methods cannot yet fully exploit the guidance of auxiliary priors provided by the normal-light images in the training dataset. In this paper, we propose a brightness-aware network with normal-light priors based on brightness-aware attention and a residual-quantized codebook. To achieve a more natural and realistic enhancement, we design a query module to obtain more reliable normal-light features and fuse them with low-light features through a fusion branch. In addition, we propose a brightness-aware attention module to further improve the robustness of the network to brightness. Extensive experimental results on both real-captured and synthetic data show that our method outperforms existing state-of-the-art methods.
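
      As a rough illustration of the multi-stage residue quantization against a normal-light codebook described above, the sketch below matches each per-pixel feature vector to its nearest code over several stages; the class name, shapes, and the straight-through gradient trick are our assumptions, not the authors' released implementation.

      import torch
      import torch.nn as nn

      class MultiStageResidueQuantizer(nn.Module):
          """Each stage snaps the remaining residue to its nearest codebook entry,
          so later stages refine what earlier stages could not represent."""

          def __init__(self, num_stages=4, codebook_size=512, dim=64):
              super().__init__()
              # One learnable codebook per stage; each row is a code vector of length `dim`.
              self.codebooks = nn.ParameterList(
                  [nn.Parameter(torch.randn(codebook_size, dim)) for _ in range(num_stages)]
              )

          def forward(self, feat):                            # feat: (B, C, H, W) with C == dim
              b, c, h, w = feat.shape
              flat = feat.permute(0, 2, 3, 1).reshape(-1, c)  # one feature vector per pixel
              residue, quantized = flat, torch.zeros_like(flat)
              for codebook in self.codebooks:
                  dist = torch.cdist(residue, codebook)       # L2 distance to every code
                  code = codebook[dist.argmin(dim=1)]         # nearest code for each pixel
                  quantized = quantized + code
                  residue = residue - code                    # hand the leftover to the next stage
              # Straight-through estimator: copy gradients around the non-differentiable argmin.
              quantized = flat + (quantized - flat).detach()
              return quantized.reshape(b, h, w, c).permute(0, 3, 1, 2)

      Stacking several small codebooks on the residue approximates a feature more finely than a single codebook of the same total size, which is the usual motivation for multi-stage residue quantization.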

    Paper & Code & Demo

    Experimental Results

      Table 1. Quantitative comparison on the LOLv1 dataset.

      Table 2. Quantitative comparison on the LOLv2-Real and LOLv2-Synthetic datasets.

    Result Visualization

      Figure 2: Visual quality comparisons of different low-light image enhancement methods on the LOLv1 dataset.

      Figure 3: Visual quality comparisons of different low-light image enhancement methods on the LOLv2-Real dataset.

      Figure 4: Visual quality comparisons of different low-light image enhancement methods on the LOLv2-Synthetic dataset.

    Citation

    @InProceedings{Liu_2023_ICCV,
     author = {Liu, Yunlong and Huang, Tao and Dong, Weisheng and Wu, Fangfang and Li, Xin and Shi, Guangming},
     title = {Low-Light Image Enhancement with Multi-Stage Residue Quantization and Brightness-Aware Attention},
     booktitle = {Proceedings of the IEEE/CVF International Conference on Computer Vision (ICCV)},
     month = {October},
     year = {2023},
     pages = {12140-12149}
    }

    Contact

    Yunlong Liu, Email: liuyunlong@stu.xidian.edu.cn
    Tao Huang, Email: thuang_666@stu.xidian.edu.cn
    Weisheng Dong, Email: wsdong@mail.xidian.edu.cn
    Fangfang Wu, Email: wufangfang@xidian.edu.cn
    Xin Li, Email: xli48@albany.edu
    Guangming Shi, Email: gmshi_xidian@163.com