Model-Guided Deep Hyperspectral Image Super-resolution

 

Weisheng Dong    Chen Zhou    Fangfang Wu    Jinjian Wu    Guangming Shi    Xin Li

School of Artificial Intelligence, Xidian University

 

 

 

Figure 1. Architecture of the proposed model-guided deep convolutional network (MoG-DCN) for HSISR. (a) The overall architecture of the proposed network; (b) the architecture of the reconstruction module; (c) the architecture of the degradation operators; and (d) the architecture of the encoding and decoding blocks of the U-net denoiser.

 

 

 

Abstract

The trade-off between spatial and spectral resolution is one of the fundamental issues in hyperspectral images (HSI). Given the challenges of directly acquiring high-resolution hyperspectral images (HR-HSI), a compromised solution is to fuse a pair of images: one with high resolution (HR) in the spatial domain but low resolution (LR) in the spectral domain, and the other vice versa. Model-based image fusion methods, including pansharpening, aim to reconstruct the HR-HSI by solving manually designed objective functions. However, such hand-crafted priors often lead to inevitable performance degradation due to the lack of end-to-end optimization. Although several deep learning-based methods have been proposed for hyperspectral pansharpening, HR-HSI-related domain knowledge has not been fully exploited, leaving room for further improvement. In this paper, we propose an iterative hyperspectral image super-resolution (HSISR) algorithm based on a deep HSI denoiser to leverage both domain-knowledge likelihood and the deep image prior. By taking the observation matrix of the HSI into account during the end-to-end optimization, we show how to unfold an iterative HSISR algorithm into a novel model-guided deep convolutional network (MoG-DCN). Representing the observation matrix by sub-networks also allows the unfolded deep HSISR network to work with different HSI situations, which enhances the flexibility of MoG-DCN. Extensive experimental results demonstrate that the proposed MoG-DCN outperforms several leading HSISR methods in terms of both implementation cost and visual quality.
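The observation model and the unfolding idea described above can be made concrete with a small sketch. Assuming the usual fusion setup (the LR-HSI is a spatially degraded version of the HR-HSI, and the HR-MSI is its projection through a spectral response matrix R), one unfolded iteration alternates a gradient step on the two data-fidelity terms with a denoising step. All names here are illustrative rather than taken from the authors' code; the identity "denoiser" stands in for the U-net that MoG-DCN trains end to end.

```python
import numpy as np

s = 4  # spatial scaling factor for this toy example

def A(x):
    """Toy spatial degradation: plain decimation (the blur is omitted here;
    in the paper it is part of the learned degradation operator)."""
    return x[::s, ::s, :]

def At(y):
    """Adjoint of decimation: zero-filled upsampling back to the HR grid."""
    up = np.zeros((y.shape[0] * s, y.shape[1] * s, y.shape[2]))
    up[::s, ::s, :] = y
    return up

def unfolded_step(x, y, z, R, denoise, step=0.4):
    """One sketch iteration: gradient step on
    ||A(x) - y||^2 + ||x R^T - z||^2, followed by a denoiser."""
    grad = At(A(x) - y) + (x @ R.T - z) @ R
    return denoise(x - step * grad)

rng = np.random.default_rng(0)
gt = rng.random((32, 32, 8))   # toy 8-band HR-HSI "ground truth"
R = np.eye(3, 8)               # toy 3-channel spectral response matrix
y = A(gt)                      # observed LR-HSI
z = gt @ R.T                   # observed HR-MSI

x = At(y)                      # crude initialization
for _ in range(50):
    x = unfolded_step(x, y, z, R, denoise=lambda v: v)  # identity "denoiser"
print(np.linalg.norm(A(x) - y) < 1e-6, np.linalg.norm(x @ R.T - z) < 1e-6)  # → True True
```

In the actual network, both the degradation operators and the denoiser are sub-networks optimized end to end, which is what distinguishes the unfolded architecture from a hand-tuned plug-and-play loop.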

 

 

 

Paper

TIP 2021    Supplementary Material

 

Citation

Weisheng Dong, Chen Zhou, Fangfang Wu, Jinjian Wu, Guangming Shi, and Xin Li, "Model-Guided Deep Hyperspectral Image Super-resolution," IEEE Transactions on Image Processing, 2021.

 

 

Bibtex

@article{dong2021model,
    author  = {Weisheng Dong and Chen Zhou and Fangfang Wu and Jinjian Wu and Guangming Shi and Xin Li},
    title   = {Model-Guided Deep Hyperspectral Image Super-resolution},
    journal = {IEEE Transactions on Image Processing},
    year    = {2021}
}

 

 

 

 

Download

 

 


                            

Code    Training data (CAVE)    Training data (HARVARD)    WV2 data (real data)

 

 

 

Results

CAVE dataset

Table 1. Average PSNR, SAM, ERGAS, and SSIM results of the test methods on the CAVE dataset, for a Gaussian blur kernel and scaling factors 8 and 16.

s = 8

| Methods        | PSNR (dB) | SAM  | ERGAS | SSIM  |
|----------------|-----------|------|-------|-------|
| HySure [1]     | 40.06     | 9.66 | 1.30  | 0.976 |
| CSTF [2]       | 42.41     | 5.04 | 0.87  | 0.979 |
| NSSR [3]       | 44.07     | 4.35 | 0.82  | 0.987 |
| BSR [4]        | 41.49     | 5.85 | 1.10  | 0.983 |
| DBIN [5]       | 48.82     | 2.09 | 0.50  | 0.996 |
| MHF-net [6]    | 46.31     | 3.39 | 0.64  | 0.994 |
| LTTR [7]       | 45.89     | 2.97 | 0.66  | 0.993 |
| CNN-FUS [8]    | 44.21     | 4.04 | 0.82  | 0.989 |
| MoG-DCN (ours) | 49.89     | 2.04 | 0.45  | 0.996 |

s = 16

| Methods        | PSNR (dB) | SAM   | ERGAS | SSIM  |
|----------------|-----------|-------|-------|-------|
| HySure [1]     | 36.74     | 15.05 | 0.88  | 0.965 |
| CSTF [2]       | 41.17     | 6.17  | 0.54  | 0.976 |
| NSSR [3]       | 38.97     | 6.73  | 1.46  | 0.977 |
| BSR [4]        | 40.83     | 6.38  | 0.59  | 0.981 |
| DBIN [5]       | 43.70     | 3.00  | 0.46  | 0.994 |
| MHF-net [6]    | 44.51     | 4.00  | 0.38  | 0.992 |
| LTTR [7]       | 42.48     | 4.25  | 0.47  | 0.987 |
| CNN-FUS [8]    | 40.37     | 5.85  | 0.59  | 0.979 |
| MoG-DCN (ours) | 46.84     | 2.62  | 0.31  | 0.995 |

Harvard dataset

Table 2. Average PSNR, SAM, ERGAS, and SSIM results of the test methods on the Harvard dataset, for a Gaussian blur kernel and scaling factors 8 and 16.

s = 8

| Methods        | PSNR (dB) | SAM  | ERGAS | SSIM  |
|----------------|-----------|------|-------|-------|
| HySure [1]     | 44.26     | 3.75 | 1.40  | 0.983 |
| CSTF [2]       | 44.98     | 3.54 | 1.07  | 0.980 |
| NSSR [3]       | 46.08     | 3.40 | 1.20  | 0.985 |
| BSR [4]        | 46.30     | 3.00 | 1.11  | 0.986 |
| DBIN [5]       | 47.36     | 2.71 | 0.97  | 0.988 |
| MHF-net [6]    | 46.42     | 3.01 | 1.09  | 0.987 |
| LTTR [7]       | 46.86     | 2.90 | 1.11  | 0.987 |
| CNN-FUS [8]    | 46.05     | 3.24 | 1.12  | 0.985 |
| MoG-DCN (ours) | 47.64     | 2.67 | 0.91  | 0.988 |

s = 16

| Methods        | PSNR (dB) | SAM  | ERGAS | SSIM  |
|----------------|-----------|------|-------|-------|
| HySure [1]     | 42.77     | 4.54 | 0.78  | 0.981 |
| CSTF [2]       | 45.17     | 3.76 | 0.57  | 0.983 |
| NSSR [3]       | 44.23     | 3.91 | 1.53  | 0.983 |
| BSR [4]        | 45.75     | 3.16 | 0.58  | 0.986 |
| DBIN [5]       | 46.27     | 2.94 | 0.53  | 0.987 |
| MHF-net [6]    | 46.23     | 3.09 | 0.54  | 0.987 |
| LTTR [7]       | 45.82     | 3.11 | 0.65  | 0.986 |
| CNN-FUS [8]    | 43.47     | 5.41 | 0.92  | 0.966 |
| MoG-DCN (ours) | 46.43     | 2.93 | 0.53  | 0.987 |
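For reference, the PSNR, SAM, and ERGAS figures reported in the tables follow the standard definitions sketched below (exact conventions vary slightly across papers, and SSIM is omitted for brevity). This is an illustrative implementation, not the evaluation code used for the paper.

```python
import numpy as np

def psnr(ref, est, data_range=1.0):
    """Mean of the per-band PSNR values, in dB."""
    mse = np.mean((ref - est) ** 2, axis=(0, 1))  # one MSE per spectral band
    return float(np.mean(10.0 * np.log10(data_range ** 2 / mse)))

def sam(ref, est, eps=1e-12):
    """Spectral Angle Mapper: mean angle (degrees) between pixel spectra."""
    r = ref.reshape(-1, ref.shape[-1])
    e = est.reshape(-1, est.shape[-1])
    cos = np.sum(r * e, axis=1) / (
        np.linalg.norm(r, axis=1) * np.linalg.norm(e, axis=1) + eps)
    return float(np.degrees(np.mean(np.arccos(np.clip(cos, -1.0, 1.0)))))

def ergas(ref, est, scale):
    """Relative dimensionless global error; `scale` is the spatial scaling factor."""
    rmse = np.sqrt(np.mean((ref - est) ** 2, axis=(0, 1)))
    mean = np.mean(ref, axis=(0, 1))
    return float(100.0 / scale * np.sqrt(np.mean((rmse / mean) ** 2)))

rng = np.random.default_rng(0)
gt = rng.random((32, 32, 31))   # toy 31-band reference cube
est = gt + 0.01                 # estimate with a constant 0.01 offset
print(round(psnr(gt, est), 2))  # → 40.0
```

Lower SAM and ERGAS values indicate better spectral and global fidelity, respectively, while higher PSNR and SSIM indicate better reconstruction quality.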

 

 

Real Data Results

Figure 2. Reconstructed images from WorldView-2 data.

 

 

 

References

[1] M. Simoes, J. Bioucas-Dias, L. B. Almeida, and J. Chanussot, "A convex formulation for hyperspectral image superresolution via subspace-based regularization," IEEE Transactions on Geoscience and Remote Sensing, vol. 53, no. 6, pp. 3373–3388, 2014.

[2] S. Li, R. Dian, L. Fang, and J. M. Bioucas-Dias, "Fusing hyperspectral and multispectral images via coupled sparse tensor factorization," IEEE Transactions on Image Processing, vol. 27, no. 8, pp. 4118–4130, 2018.

[3] W. Dong, F. Fu, G. Shi, X. Cao, J. Wu, G. Li, and X. Li, "Hyperspectral image super-resolution via non-negative structured sparse representation," IEEE Transactions on Image Processing, vol. 25, no. 5, pp. 2337–2352, 2016.

[4] Q. Wei, J. Bioucas-Dias, N. Dobigeon, and J.-Y. Tourneret, "Hyperspectral and multispectral image fusion based on a sparse representation," IEEE Transactions on Geoscience and Remote Sensing, vol. 53, no. 7, pp. 3658–3668, 2015.

[5] W. Wang, W. Zeng, Y. Huang, X. Ding, and J. Paisley, "Deep blind hyperspectral image fusion," in Proceedings of the IEEE International Conference on Computer Vision, 2019, pp. 4150–4159.

[6] Q. Xie, M. Zhou, Q. Zhao, D. Meng, W. Zuo, and Z. Xu, "Multispectral and hyperspectral image fusion by MS/HS fusion net," in Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, 2019, pp. 1585–1594.

[7] R. Dian, S. Li, and L. Fang, "Learning a low tensor-train rank representation for hyperspectral image super-resolution," IEEE Transactions on Neural Networks and Learning Systems, vol. 30, no. 9, pp. 2672–2683, 2019.

[8] R. Dian, S. Li, and X. Kang, "Regularizing hyperspectral and multispectral image fusion by CNN denoiser," IEEE Transactions on Neural Networks and Learning Systems, 2020.

 

 

 

Contact

Weisheng Dong, Email: wsdong@mail.xidian.edu.cn

Chen Zhou, Email: zhouchen_7@163.com

Fangfang Wu, Email: 271076679@qq.com

Jinjian Wu, Email: jinjian.wu@mail.xidian.edu.cn

Guangming Shi, Email: gmshi@xidian.edu.cn

Xin Li, Email: xin.li@ieee.org