🚩 Add saving and loading of the training state. To resume training, pass an option named `resume_state`, e.g., `"resume_state": "../experiments/debug_001_RRDB_PSNR_x4_DIV2K/training_state/200.state"` (a hedged resume sketch follows these notes).
🚩 Use Python logging and support PyTorch 1.0.
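For intuition only, here is a minimal sketch of what resuming from a saved `.state` file could look like; the keys `'epoch'` and `'iter'` are assumptions for illustration, not the toolkit's exact schema.

```python
import torch

# Illustrative only: the exact contents of a .state file are defined by the toolkit;
# the keys 'epoch' and 'iter' below are assumptions for this sketch.
state_path = '../experiments/debug_001_RRDB_PSNR_x4_DIV2K/training_state/200.state'
resume_state = torch.load(state_path)

start_epoch = resume_state.get('epoch', 0)   # epoch counter to continue from
current_iter = resume_state.get('iter', 0)   # global iteration counter
print('Resuming from epoch %d, iteration %d' % (start_epoch, current_iter))
```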
An image super-resolution toolkit that is flexible for development. It now provides:
- PSNR-oriented SR models (e.g., SRCNN and SRResNet). You can try different architectures, e.g., ResNet Block, ResNeXt Block, Dense Block, Residual Dense Block, Poly Block, Dual Path Block, Squeeze-and-Excitation Block, Residual-in-Residual Dense Block, etc.
- Enhanced SRGAN model (it can also train the SRGAN model). Enhanced SRGAN achieves consistently better visual quality, with more realistic and natural textures, than SRGAN, and won first place in the PIRM2018-SR Challenge. For more details, please refer to the paper and the ESRGAN repo. (If you just want to test the model, the ESRGAN repo provides simpler testing codes.)
- SFTGAN model. It adopts Spatial Feature Transform (SFT) to effectively incorporate other conditions/priors, such as a semantic prior for image SR represented by segmentation probability maps. For more details, please refer to the paper and the SFTGAN repo. (A minimal sketch of the SFT idea follows this list.)
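To make the SFT idea concrete, below is a minimal, hedged PyTorch sketch of a spatial feature transform layer; the class name, channel sizes, and module structure are illustrative assumptions, not the repository's actual implementation.

```python
import torch
import torch.nn as nn

class SFTLayer(nn.Module):
    """Illustrative SFT layer: predicts a per-pixel scale (gamma) and shift (beta)
    from condition maps (e.g., segmentation probability maps) and applies them to
    the SR features. Channel sizes are assumptions for this sketch."""
    def __init__(self, feat_channels=64, cond_channels=8, hidden=32):
        super().__init__()
        self.gamma = nn.Sequential(
            nn.Conv2d(cond_channels, hidden, 1), nn.LeakyReLU(0.1, inplace=True),
            nn.Conv2d(hidden, feat_channels, 1))
        self.beta = nn.Sequential(
            nn.Conv2d(cond_channels, hidden, 1), nn.LeakyReLU(0.1, inplace=True),
            nn.Conv2d(hidden, feat_channels, 1))

    def forward(self, features, condition):
        # Affine modulation of SR features conditioned on the prior maps.
        return features * self.gamma(condition) + self.beta(condition)

# Usage with dummy tensors: 64-channel features, 8-category probability maps.
feat = torch.randn(1, 64, 32, 32)
seg_prob = torch.rand(1, 8, 32, 32)
out = SFTLayer()(feat, seg_prob)
print(out.shape)  # torch.Size([1, 64, 32, 32])
```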
BibTeX for the ESRGAN and SFTGAN papers:
```
@InProceedings{wang2018esrgan,
  author = {Wang, Xintao and Yu, Ke and Wu, Shixiang and Gu, Jinjin and Liu, Yihao and Dong, Chao and Qiao, Yu and Loy, Chen Change},
  title = {ESRGAN: Enhanced super-resolution generative adversarial networks},
  booktitle = {The European Conference on Computer Vision Workshops (ECCVW)},
  month = {September},
  year = {2018}
}
@InProceedings{wang2018sftgan,
  author = {Wang, Xintao and Yu, Ke and Dong, Chao and Loy, Chen Change},
  title = {Recovering realistic texture in image super-resolution by deep spatial feature transform},
  booktitle = {The IEEE Conference on Computer Vision and Pattern Recognition (CVPR)},
  month = {June},
  year = {2018}
}
```
Dependencies:
- Python 3 (Anaconda is recommended)
- PyTorch >= 0.4.0
- NVIDIA GPU + CUDA
- Python packages: `pip install numpy opencv-python lmdb`
- [optional] Python packages: `pip install tensorboardX`, for visualizing training curves (a minimal logging sketch follows this list).
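As a hedged illustration of how the optional tensorboardX package is typically used to log curves (the log directory and tag names are arbitrary examples, not the toolkit's configuration):

```python
from tensorboardX import SummaryWriter

# Write scalars that TensorBoard can plot as training curves.
writer = SummaryWriter(log_dir='../tb_logger/example_run')  # illustrative path
for step in range(100):
    fake_loss = 1.0 / (step + 1)                 # stand-in for a real training loss
    writer.add_scalar('loss/l_g_total', fake_loss, step)
writer.close()
```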
The codes are in `./codes`. We provide a detailed explanation of the code framework in `./codes`.
We also provide:
- Some useful scripts. More details are in `./codes/scripts`.
- Evaluation codes, e.g., the PSNR/SSIM metrics (a minimal PSNR sketch follows this list).
- Wiki pages, e.g., how to make a high-quality gif with full (true) color, Matlab bicubic imresize, etc.
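For reference, a minimal sketch of the standard PSNR computation on 8-bit images (NumPy only; this shows the formula, not necessarily the exact evaluation script shipped in the repo):

```python
import numpy as np

def calculate_psnr(img1, img2):
    """PSNR between two uint8 images of the same shape, in dB."""
    img1 = img1.astype(np.float64)
    img2 = img2.astype(np.float64)
    mse = np.mean((img1 - img2) ** 2)
    if mse == 0:
        return float('inf')  # identical images
    return 20 * np.log10(255.0 / np.sqrt(mse))

# Example with random images (real evaluation usually crops borders / uses the Y channel).
a = np.random.randint(0, 256, (64, 64, 3), dtype=np.uint8)
b = np.random.randint(0, 256, (64, 64, 3), dtype=np.uint8)
print('PSNR: %.2f dB' % calculate_psnr(a, b))
```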
The common SR datasets can be found in the Datasets section below. Detailed data preparation can be seen in `codes/data`.
We provide pretrained models in the Pretrained models section below.
To test ESRGAN models:
- Modify the configuration file `options/test/test_esrgan.json`.
- Run the command: `python test.py -opt options/test/test_esrgan.json` (a hedged sketch of how such a `-opt` JSON option file is typically loaded follows the testing steps).

To test SR models:
- Modify the configuration file `options/test/test_sr.json`.
- Run the command: `python test.py -opt options/test/test_sr.json`

To test SFTGAN models:
- Obtain the segmentation probability maps: `python test_seg.py`
- Run the command: `python test_sftgan.py`
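A minimal, hedged sketch of the `-opt` pattern used by the commands above: the entry point reads a JSON option file holding the dataset and model paths. Everything except the `-opt` argument name is illustrative, not the repository's actual `test.py`.

```python
import argparse
import json

# Parse the -opt argument and load the JSON configuration, as the commands above suggest.
parser = argparse.ArgumentParser()
parser.add_argument('-opt', type=str, required=True, help='path to the JSON option file')
args = parser.parse_args()

with open(args.opt, 'r') as f:
    opt = json.load(f)

# The option file holds the test settings (e.g., dataset paths and the pretrained model path).
print('Loaded options:', list(opt.keys()))
```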
To train ESRGAN models, we use a PSNR-oriented pretrained SR model to initialize the parameters for better quality:
- Prepare the datasets, usually the DIV2K dataset. More details are in `codes/data` and the wiki (Faster IO speed).
- Prepare the PSNR-oriented pretrained model. You can use `RRDB_PSNR_x4.pth` as the pretrained model (a hedged weight-loading sketch follows these steps).
- Modify the configuration file `options/train/train_esrgan.json`.
- Run the command: `python train.py -opt options/train/train_esrgan.json`
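A minimal, hedged sketch of what initializing from the PSNR-oriented pretrained model usually amounts to in PyTorch; the generator class and its import path are placeholders, and in practice the configuration file handles this for you.

```python
import torch

# Placeholder generator: the real architecture is defined inside the repository
# (commented out because the exact import path and constructor are assumptions).
# from models.modules.RRDBNet_arch import RRDBNet
# netG = RRDBNet(in_nc=3, out_nc=3, nf=64, nb=23)

# Load the PSNR-oriented weights so GAN training starts from a strong initialization.
pretrained = torch.load('../experiments/pretrained_models/RRDB_PSNR_x4.pth')
# netG.load_state_dict(pretrained, strict=True)
print('Loaded %d parameter tensors from the PSNR-oriented model' % len(pretrained))
```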
To train SR models:
- Prepare the datasets, usually the DIV2K dataset. More details are in `codes/data` (a hedged LMDB packing sketch, related to the faster-IO note above, follows these steps).
- Modify the configuration file `options/train/train_sr.json`.
- Run the command: `python train.py -opt options/train/train_sr.json`
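As a hedged illustration of packing training images into an LMDB database for faster IO (paths and the key/value scheme are illustrative; follow `codes/data` and the wiki for the format the data loader actually expects):

```python
import glob
import os

import cv2
import lmdb

# Pack a folder of HR training images into an LMDB database.
img_paths = sorted(glob.glob('../datasets/DIV2K/DIV2K_train_HR/*.png'))  # illustrative path
env = lmdb.open('../datasets/DIV2K/DIV2K_train_HR.lmdb', map_size=1099511627776)  # 1 TB upper bound

with env.begin(write=True) as txn:
    for path in img_paths:
        img = cv2.imread(path, cv2.IMREAD_UNCHANGED)
        key = os.path.splitext(os.path.basename(path))[0]
        txn.put(key.encode('ascii'), img.tobytes())
        # Note: raw bytes alone are not decodable later without the image shape,
        # so a real pipeline also stores per-image shape metadata (e.g., a meta file).
env.close()
```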
To train SFTGAN models, pretraining is also important. We use a PSNR-oriented pretrained SR model (trained on DIV2K) to initialize the SFTGAN model.
- First prepare the segmentation probability maps for the training data: run `test_seg.py`. We provide a pretrained segmentation model for 7 outdoor categories in the Pretrained models section. We used Xiaoxiao Li's codes to train our segmentation model and transferred it to a PyTorch model.
- Put the images and segmentation probability maps in a folder as described in `codes/data`.
- Transfer the pretrained model parameters to the SFTGAN model (a hedged parameter-transfer sketch follows these steps):
  - First train in `debug` mode and obtain a saved model.
  - Run `transfer_params_sft.py` to initialize the model.
  - We provide an initialized model named `sft_net_ini.pth` in the Pretrained models section.
- Modify the configuration file `options/train/train_sftgan.json`.
- Run the command: `python train.py -opt options/train/train_sftgan.json`
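As a hedged illustration of the parameter-transfer step (the real logic lives in `transfer_params_sft.py`; the file paths and the key-matching rule below are assumptions for the sketch):

```python
import torch

# Copy weights from a PSNR-oriented pretrained SR model into the SFT network wherever
# parameter names and shapes match, leaving SFT-specific layers at their debug-run
# initialization. Generic sketch only; file paths are hypothetical.
sr_weights = torch.load('../experiments/pretrained_models/sr_psnr_pretrained.pth')  # hypothetical name
sft_weights = torch.load('../experiments/debug_sftgan/models/latest_G.pth')         # from the debug run

transferred = 0
for name, param in sr_weights.items():
    if name in sft_weights and sft_weights[name].shape == param.shape:
        sft_weights[name] = param
        transferred += 1

torch.save(sft_weights, 'sft_net_ini_sketch.pth')
print('Transferred %d matching parameter tensors' % transferred)
```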
Several common SR datasets are listed below.

| Name | Datasets | Short Description | Google Drive | Baidu Drive |
|---|---|---|---|---|
| Classical SR Training | T91 | 91 images for training | Google Drive | Baidu Drive |
| | BSDS200 | A subset (train) of BSD500 for training | | |
| | General100 | 100 images for training | | |
| Classical SR Testing | Set5 | Set5 test dataset | | |
| | Set14 | Set14 test dataset | | |
| | BSDS100 | A subset (test) of BSD500 for testing | | |
| | urban100 | 100 building images for testing (regular structures) | | |
| | manga109 | 109 images of Japanese manga for testing | | |
| | historical | 10 gray LR images without the ground-truth | | |
| 2K Resolution | DIV2K | proposed in NTIRE17 (800 train and 100 validation) | Google Drive | Baidu Drive |
| | Flickr2K | 2650 2K images from Flickr for training | | |
| | DF2K | A merged training dataset of DIV2K and Flickr2K | | |
| OST (Outdoor Scenes) | OST Training | 7 categories of images with rich textures | Google Drive | Baidu Drive |
| | OST300 | 300 test images of outdoor scenes | | |
| PIRM | PIRM | PIRM self-val, val, test datasets | Google Drive | Baidu Drive |
We provide some pretrained models. For more details about the pretrained models, please see `experiments/pretrained_models`.
You can put the downloaded models in the `experiments/pretrained_models` folder.
| Name | Models | Short Description | Google Drive | Baidu Drive |
|---|---|---|---|---|
| ESRGAN | RRDB_ESRGAN_x4.pth | the final ESRGAN model we used in our paper | Google Drive | Baidu Drive |
| | RRDB_PSNR_x4.pth | model with high PSNR performance | | |
| SFTGAN | segmentation_OST_bic.pth | segmentation model | Google Drive | Baidu Drive |
| | sft_net_ini.pth | sft_net for initialization | | |
| | sft_net_torch.pth | SFTGAN Torch version (paper) | | |
| | SFTGAN_bicx4_noBN_OST_bg.pth | SFTGAN PyTorch version | | |
| SRGAN*1 | SRGAN_bicx4_303_505.pth | SRGAN (with modification) | Google Drive | Baidu Drive |
| SRResNet*2 | SRResNet_bicx4_in3nf64nb16.pth | SRResNet (with modification) | Google Drive | Baidu Drive |
😆 Image Viewer - HandyViewer
You may try HandyViewer, an image viewer with which you can switch images at a fixed zoom ratio, which makes it easy to compare image details.
- The code architecture is inspired by pytorch-cyclegan.
- Thanks to Wai Ho Kwok, who contributed to the initial version.