
Basic Super-Resolution codes for development. Includes ESRGAN, SFT-GAN for training and testing.

BasicSR

🚩 Add saving and loading of the training state. When resuming training, just pass an option named resume_state, e.g., "resume_state": "../experiments/debug_001_RRDB_PSNR_x4_DIV2K/training_state/200.state".

🚩 Use Python logging and support PyTorch 1.0.

An image super-resolution toolkit flexible for development. It now provides:

  1. PSNR-oriented SR models (e.g., SRCNN, SRResNet, etc.). You can try different architectures, e.g., ResNet Block, ResNeXt Block, Dense Block, Residual Dense Block, Poly Block, Dual Path Block, Squeeze-and-Excitation Block, Residual-in-Residual Dense Block, etc.
  2. Enhanced SRGAN model (it can also train the SRGAN model). Enhanced SRGAN achieves consistently better visual quality, with more realistic and natural textures, than SRGAN, and won first place in the PIRM2018-SR Challenge. For more details, please refer to the Paper and the ESRGAN repo. (If you just want to test the model, the ESRGAN repo provides simpler testing codes.)
  3. SFTGAN model. It adopts Spatial Feature Transform (SFT) to effectively incorporate other conditions/priors, such as a semantic prior for image SR represented by segmentation probability maps. For more details, please refer to the Paper and the SFTGAN repo.
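The SFT operation in item 3 can be pictured numerically: a condition (e.g., a segmentation probability map) is mapped to a per-pixel scale gamma and shift beta that modulate the SR network's features. A minimal numpy illustration follows; in the real model gamma and beta are predicted by small learned condition networks, whereas here they are simply given:

```python
import numpy as np

def spatial_feature_transform(features, gamma, beta):
    """Modulate a feature map element-wise: out = gamma * features + beta.

    features, gamma, beta: arrays of shape (C, H, W). In SFTGAN, gamma and
    beta are derived from condition maps such as segmentation probabilities.
    """
    return gamma * features + beta

# Toy example: a 2-channel, 2x2 feature map modulated by a flat condition.
feats = np.ones((2, 2, 2))
gamma = np.full((2, 2, 2), 0.5)  # stands in for a learned per-pixel scale
beta = np.zeros((2, 2, 2))
out = spatial_feature_transform(feats, gamma, beta)
```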

BibTex

@InProceedings{wang2018esrgan,
    author = {Wang, Xintao and Yu, Ke and Wu, Shixiang and Gu, Jinjin and Liu, Yihao and Dong, Chao and Qiao, Yu and Loy, Chen Change},
    title = {ESRGAN: Enhanced super-resolution generative adversarial networks},
    booktitle = {The European Conference on Computer Vision Workshops (ECCVW)},
    month = {September},
    year = {2018}
}
@InProceedings{wang2018sftgan,
    author = {Wang, Xintao and Yu, Ke and Dong, Chao and Loy, Chen Change},
    title = {Recovering realistic texture in image super-resolution by deep spatial feature transform},
    booktitle = {The IEEE Conference on Computer Vision and Pattern Recognition (CVPR)},
    month = {June},
    year = {2018}
}

Table of Contents

  1. Dependencies
  2. Codes
  3. Usage
  4. Datasets
  5. Pretrained models

Dependencies

Codes

We provide a detailed explanation of the code framework in ./codes.

We also provide:

  1. Some useful scripts. More details in ./codes/scripts.
  2. Evaluation codes, e.g., PSNR/SSIM metric.
  3. Wiki pages, e.g., How to make a high-quality GIF with full (true) color, MATLAB bicubic imresize, etc.
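The PSNR metric mentioned in item 2 has a short closed form, PSNR = 10 · log10(MAX² / MSE). A self-contained numpy sketch for 8-bit images (not the toolkit's own implementation, which may differ in border cropping and color-space handling):

```python
import numpy as np

def psnr(img1, img2, max_val=255.0):
    """Peak signal-to-noise ratio between two images, in dB (higher is better)."""
    mse = np.mean((img1.astype(np.float64) - img2.astype(np.float64)) ** 2)
    if mse == 0:
        return float("inf")  # identical images
    return 10.0 * np.log10(max_val ** 2 / mse)
```

Note that published SR results often compute PSNR on the Y channel only, after cropping a border of scale pixels.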

Usage

Data and model preparation

The common SR datasets can be found in Datasets. Detailed data preparation can be seen in codes/data.

We provide pretrained models in Pretrained models.

How to Test

Test ESRGAN (SRGAN) models

  1. Modify the configuration file options/test/test_esrgan.json
  2. Run command: python test.py -opt options/test/test_esrgan.json

Test SR models

  1. Modify the configuration file options/test/test_sr.json
  2. Run command: python test.py -opt options/test/test_sr.json

Test SFTGAN models

  1. Obtain the segmentation probability maps: python test_seg.py
  2. Run command: python test_sftgan.py
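Step 1 turns the segmentation network's per-pixel class logits into probability maps, which is conceptually a softmax over the class axis. A small numpy sketch (class count and shapes are illustrative):

```python
import numpy as np

def logits_to_probability_maps(logits):
    """Convert per-pixel class logits of shape (K, H, W) into probability maps.

    Softmax over the class axis; each spatial position then sums to 1.
    """
    shifted = logits - logits.max(axis=0, keepdims=True)  # numerical stability
    e = np.exp(shifted)
    return e / e.sum(axis=0, keepdims=True)

# Uniform logits over 7 classes (OST has 7 outdoor categories) give 1/7 everywhere.
probs = logits_to_probability_maps(np.zeros((7, 4, 4)))
```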

How to Train

Train ESRGAN (SRGAN) models

We use a PSNR-oriented pretrained SR model to initialize the parameters for better quality.

  1. Prepare datasets, usually the DIV2K dataset. More details are in codes/data and wiki (Faster IO speed).
  2. Prepare the PSNR-oriented pretrained model. You can use RRDB_PSNR_x4.pth as the pretrained model.
  3. Modify the configuration file options/train/train_esrgan.json
  4. Run command: python train.py -opt options/train/train_esrgan.json

Train SR models

  1. Prepare datasets, usually the DIV2K dataset. More details are in codes/data.
  2. Modify the configuration file options/train/train_sr.json
  3. Run command: python train.py -opt options/train/train_sr.json

Train SFTGAN models

Pretraining is also important. We use a PSNR-oriented pretrained SR model (trained on DIV2K) to initialize the SFTGAN model.

  1. First prepare the segmentation probability maps for training data: run test_seg.py. We provide a pretrained segmentation model for 7 outdoor categories in Pretrained models. We use Xiaoxiao Li's codes to train our segmentation model and transfer it to a PyTorch model.
  2. Put the images and segmentation probability maps in a folder as described in codes/data.
  3. Transfer the pretrained model parameters to the SFTGAN model.
    1. First train with debug mode and obtain a saved model.
    2. Run transfer_params_sft.py to initialize the model.
    3. We provide an initialized model named sft_net_ini.pth in Pretrained models.
  4. Modify the configuration file in options/train/train_sftgan.json
  5. Run command: python train.py -opt options/train/train_sftgan.json
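The transfer in step 3 conceptually copies every parameter whose name and shape match from the pretrained SR model into the SFTGAN model, leaving the SFT-specific layers at their initial values. A framework-free sketch of that idea using plain dicts of numpy arrays (the real transfer_params_sft.py operates on PyTorch state dicts):

```python
import numpy as np

def transfer_matching_params(pretrained, target):
    """Copy parameters from `pretrained` into `target` when both the name and
    the array shape agree; parameters unique to the target keep their values.
    Both arguments are dicts mapping parameter names to numpy arrays.
    """
    transferred = dict(target)
    for name, value in pretrained.items():
        if name in target and target[name].shape == value.shape:
            transferred[name] = value.copy()
    return transferred
```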

Datasets

Several common SR datasets are listed below.

| Name | Datasets | Short Description | Google Drive | Baidu Drive |
|---|---|---|---|---|
| Classical SR Training | T91 | 91 images for training | Google Drive | Baidu Drive |
| | BSDS200 | A subset (train) of BSD500 for training | | |
| | General100 | 100 images for training | | |
| Classical SR Testing | Set5 | Set5 test dataset | | |
| | Set14 | Set14 test dataset | | |
| | BSDS100 | A subset (test) of BSD500 for testing | | |
| | urban100 | 100 building images for testing (regular structures) | | |
| | manga109 | 109 images of Japanese manga for testing | | |
| | historical | 10 gray LR images without the ground truth | | |
| 2K Resolution | DIV2K | Proposed in NTIRE17 (800 train and 100 validation images) | Google Drive | Baidu Drive |
| | Flickr2K | 2650 2K images from Flickr for training | | |
| | DF2K | A merged training dataset of DIV2K and Flickr2K | | |
| OST (Outdoor Scenes) | OST Training | 7 categories of images with rich textures | Google Drive | Baidu Drive |
| | OST300 | 300 test images of outdoor scenes | | |
| PIRM | PIRM | PIRM self-val, val, and test datasets | Google Drive | Baidu Drive |

Pretrained models

We provide some pretrained models. For more details about the pretrained models, please see experiments/pretrained_models.

You can put the downloaded models in the experiments/pretrained_models folder.

| Name | Models | Short Description | Google Drive | Baidu Drive |
|---|---|---|---|---|
| ESRGAN | RRDB_ESRGAN_x4.pth | Final ESRGAN model we used in our paper | Google Drive | Baidu Drive |
| | RRDB_PSNR_x4.pth | Model with high PSNR performance | | |
| SFTGAN | segmentation_OST_bic.pth | Segmentation model | Google Drive | Baidu Drive |
| | sft_net_ini.pth | sft_net for initialization | | |
| | sft_net_torch.pth | SFTGAN Torch version (paper) | | |
| | SFTGAN_bicx4_noBN_OST_bg.pth | SFTGAN PyTorch version | | |
| SRGAN*1 | SRGAN_bicx4_303_505.pth | SRGAN (with modification) | Google Drive | Baidu Drive |
| SRResNet*2 | SRResNet_bicx4_in3nf64nb16.pth | SRResNet (with modification) | Google Drive | Baidu Drive |

😆 Image Viewer - HandyViewer

You may try HandyViewer, an image viewer that lets you switch between images at a fixed zoom ratio, which makes it easy to compare image details.


Acknowledgement

  • Code architecture is inspired by pytorch-cyclegan.
  • Thanks to Wai Ho Kwok, who contributes to the initial version.
