Uncertainty-Driven Loss for Single Image Super-Resolution
Qian Ning1, Weisheng Dong1*, Xin Li2, Jinjian Wu1, Guangming Shi1
1School of Artificial Intelligence, Xidian University
2West Virginia University
Figure 1. Overview of training an SISR network with the proposed L_UDL loss. The training process is divided into two steps: the first step estimates the uncertainty θ precisely, and the second step generates the final mean value y. In step 1, shown in (a), the mean value y and variance θ are pretrained with the L_ESU loss. In step 2, shown in (b), the mean-value network for y is trained with the L_UDL loss, while the network inferring the variance θ is fixed. Note that the mean-value network in step 2 starts training from the pretrained network of step 1. "Nearest Upsampling" denotes the interpolation operator.
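The two-step schedule described in the caption can be illustrated with a toy numerical sketch. The code below is not the authors' implementation (the real method trains deep SR networks, and the exact forms of L_ESU and L_UDL are defined in the paper); it fits a "mean" image and a per-pixel log-variance s = log θ by gradient descent, using a squared-error aleatoric form exp(-s)·r² + s as a stand-in for step 1, then freezes s and reweights step 2 residuals by exp(s):

```python
import numpy as np

rng = np.random.default_rng(0)
target = rng.random((8, 8))        # stand-in for the HR ground truth
mean = np.zeros_like(target)       # stand-in for the predicted mean y
log_var = np.zeros_like(target)    # stand-in for s = log(variance)
lr = 0.01

# Step 1: jointly fit mean and uncertainty with an aleatoric-style loss,
# loss = exp(-s) * r^2 + s, where r = mean - target (a proxy for L_ESU).
for _ in range(200):
    r = mean - target
    mean -= lr * 2.0 * np.exp(-log_var) * r          # d(loss)/d(mean)
    log_var -= lr * (1.0 - np.exp(-log_var) * r**2)  # d(loss)/d(s)

# Step 2: freeze the uncertainty and retrain the mean with residuals
# weighted by exp(s), so high-uncertainty pixels count more (the L_UDL idea).
weights = np.exp(log_var)
weights /= weights.mean()          # keep the overall loss scale unchanged
for _ in range(300):
    mean -= lr * 2.0 * weights * (mean - target)

print(float(np.abs(mean - target).mean()))  # small final error
```

In the real pipeline both "heads" are outputs of a deep network and the updates come from backpropagation; the sketch only shows the two-phase schedule (joint pretraining, then frozen-variance reweighting).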
Abstract
In low-level vision tasks such as single image super-resolution (SISR), the traditional MSE or L_1 loss treats every pixel equally, under the assumption that all pixels are equally important. However, it has long been recognized that
texture and edge areas carry more important visual information than smooth
areas in photographic images. How to achieve such spatial adaptation in a
principled manner has been an open problem in both traditional model-based and
modern learning-based approaches toward SISR. In this paper, we propose a new
adaptive weighted loss for SISR to train deep networks focusing on challenging
situations such as textured and edge pixels with high uncertainty.
Specifically, we introduce variance estimation characterizing the uncertainty
on a pixel-by-pixel basis into SISR solutions so the targeted pixels in a
high-resolution image (mean) and their corresponding uncertainty (variance) can
be learned simultaneously. Moreover, uncertainty estimation allows us to
leverage conventional wisdom such as sparsity prior for regularizing SISR
solutions. Ultimately, pixels with large uncertainty (e.g., texture and edge pixels)
will be prioritized in SISR according to their importance to visual quality.
For the first time, we demonstrate that such uncertainty-driven loss can
achieve better results than MSE or L_1 loss for a wide range of network
architectures. Experimental results on three popular SISR networks show that
our proposed uncertainty-driven loss has achieved better PSNR performance than
traditional loss functions without any increased computation during testing.
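As a rough, hedged illustration of the two losses named above (the exact definitions of L_ESU and L_UDL are given in the paper; the forms below are common aleatoric-uncertainty stand-ins, not the authors' code, and the `sparsity_weight` coefficient is an assumed hyperparameter):

```python
import numpy as np

def esu_loss(pred, target, log_var, sparsity_weight=0.1):
    """Step-1 style loss: fit the HR image (mean) and per-pixel uncertainty
    jointly. Uses the standard aleatoric form exp(-s)*|y_hat - y| + s plus an
    L1 sparsity term on s; the paper's exact L_ESU may differ (sketch only)."""
    resid = np.abs(pred - target)
    return float(np.mean(np.exp(-log_var) * resid + log_var
                         + sparsity_weight * np.abs(log_var)))

def udl_loss(pred, target, log_var):
    """Step-2 style loss: weight residuals by the frozen uncertainty estimate,
    so high-uncertainty pixels (edges, texture) dominate training."""
    weights = np.exp(log_var)
    weights = weights / weights.mean()   # keep the overall loss scale
    return float(np.mean(weights * np.abs(pred - target)))
```

With equal residuals, a pixel flagged as high-uncertainty contributes more to `udl_loss` than a low-uncertainty one, which is the spatial adaptation the abstract describes. Because the reweighting happens only in the loss, inference cost is unchanged.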
Citation
Qian Ning, Weisheng Dong, Xin Li, Jinjian Wu, Guangming Shi, "Uncertainty-Driven Loss for Single Image Super-Resolution", in Advances in Neural Information Processing Systems (NeurIPS), 2021.
Bibtex
@inproceedings{ning2021uncertainty,
  title={Uncertainty-Driven Loss for Single Image Super-Resolution},
  author={Ning, Qian and Dong, Weisheng and Li, Xin and Wu, Jinjian and Shi, Guangming},
  booktitle={Advances in Neural Information Processing Systems},
  year={2021}
}
Download
Results
Contact
Qian Ning, Email: ningqian@stu.xidian.edu.cn
Weisheng Dong*, Email: wsdong@mail.xidian.edu.cn
Xin Li, Email: xin.li@mail.wvu.edu
Jinjian Wu, Email: jinjian.wu@mail.xidian.edu.cn
Guangming Shi, Email: gmshi@xidian.edu.cn