
Learning-to-See-in-the-Dark

This is a TensorFlow implementation of Learning to See in the Dark, CVPR 2018, by Chen Chen, Qifeng Chen, Jia Xu, and Vladlen Koltun.

Project Website
Paper


This code includes the default model for training and testing on the See-in-the-Dark (SID) dataset.

Demo Video

https://youtu.be/qWKUFK7MWvg

Setup

Requirement

Required Python (version 2.7) libraries: TensorFlow (>= 1.1), SciPy, NumPy, and Rawpy.
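
Assuming a Python 2.7 environment with pip, these can typically be installed with the command below (package names are the standard PyPI ones; pin versions to match your CUDA setup as needed, and use "tensorflow" instead of "tensorflow-gpu" for CPU mode):

pip install tensorflow-gpu scipy numpy rawpy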

Tested on Ubuntu with an Intel i7 CPU and an Nvidia Titan X (Pascal) GPU, using CUDA (>= 8.0) and cuDNN (>= 5.0). CPU mode should also work with minor changes, but it has not been tested.

Dataset

Update (Aug 2018): We found some misalignment with the ground truth for images 10034, 10045, and 10172. Please exclude these images from quantitative results; they can still be used for qualitative evaluation.

We provide datasets captured by Sony and Fuji cameras. To download the data, you can run

python download_dataset.py

or you can download it directly from Google Drive: Sony (25 GB) and Fuji (52 GB).

Google Drive imposes a download limit within a fixed period of time. If you cannot download because of this limit, try these alternative links: Sony (25 GB) and Fuji (52 GB).

New: we now provide the files in parts on Baidu Drive. After downloading all the parts, combine them by running "cat SonyPart* > Sony.zip" and "cat FujiPart* > Fuji.zip".

File lists are provided. Each row contains a short-exposure image path, the corresponding long-exposure image path, the camera ISO, and the F-number. Note that multiple short-exposure images may correspond to the same long-exposure image.

The file name encodes the image information. For example, in "10019_00_0.033s.RAF", the first digit "1" means it is from the test set ("0" for the training set and "2" for the validation set); "0019" is the image ID; the following "00" is the number in the sequence/burst; "0.033s" is the exposure time, 1/30 second.
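
For illustration, here is a small Python sketch that decodes a file-list row and a filename according to this convention (the example row, paths, and helper names are hypothetical, not part of this repository):

import os

SPLITS = {'0': 'train', '1': 'test', '2': 'validation'}

def parse_filename(path):
    # Decode e.g. '10019_00_0.033s.RAF' into its components.
    stem = os.path.splitext(os.path.basename(path))[0]  # '10019_00_0.033s'
    image_id, burst_no, exposure = stem.split('_')
    return {'split': SPLITS[image_id[0]],               # leading digit: 0/1/2
            'image_id': image_id[1:],                   # '0019'
            'burst_number': burst_no,                   # '00'
            'exposure_s': float(exposure.rstrip('s'))}  # 0.033 (= 1/30 s)

# Each file-list row: short path, long path, ISO, F-number (row is made up).
row = './Fuji/short/10019_00_0.033s.RAF ./Fuji/long/10019_00_10s.RAF ISO200 F9'
short_path, long_path, iso, f_number = row.split()
print(parse_filename(short_path))  # {'split': 'test', 'image_id': '0019', ...}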

Testing

  1. Clone this repository.
  2. Download the pretrained models by running
python download_models.py
  3. Run "python test_Sony.py". This will generate results on the Sony test set.
  4. Run "python test_Fuji.py". This will generate results on the Fuji test set.

By default, the code reads data from the "./dataset/Sony/" and "./dataset/Fuji/" folders. If you saved the dataset elsewhere, change "input_dir" and "gt_dir" at the beginning of the code, as in the sketch below.
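
For reference, the relevant variables near the top of the test scripts look roughly like the following (the default paths; verify against the actual file):

input_dir = './dataset/Sony/short/'  # short-exposure input images
gt_dir = './dataset/Sony/long/'      # long-exposure ground-truth images
# Point both at your own folders if the dataset is stored elsewhere.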

Training new models

  1. To train the Sony model, run "python train_Sony.py". The results and model will be saved in the "result_Sony" folder by default.
  2. To train the Fuji model, run "python train_Fuji.py". The results and model will be saved in the "result_Fuji" folder by default.

As with testing, the code reads data from the "./dataset/Sony/" and "./dataset/Fuji/" folders by default; if your dataset is stored elsewhere, change "input_dir" and "gt_dir" at the beginning of the code.

Loading the raw data and processing it with Rawpy takes significantly more time than backpropagation. By default, the code loads all of the ground-truth data processed by Rawpy into memory without 8-bit or 16-bit quantization. This requires at least 64 GB of RAM for training the Sony model and 128 GB for the Fuji model. To train on a machine with less RAM, you may need to revise the code to read the ground-truth data from disk. We provide the 16-bit ground-truth images processed by Rawpy: Sony (12 GB) and Fuji (22 GB).
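
One possible workaround, sketched below under assumed paths and file extensions (this is not a script shipped with the repository), is to pre-render the ground truth once with Rawpy and cache the 16-bit arrays on disk, then load the cached arrays during training:

import glob
import os

import numpy as np
import rawpy

gt_dir = './dataset/Sony/long/'         # long-exposure raw files (.ARW for Sony)
cache_dir = './dataset/Sony/long_npy/'  # hypothetical cache folder
if not os.path.isdir(cache_dir):
    os.makedirs(cache_dir)

for raw_path in glob.glob(os.path.join(gt_dir, '*.ARW')):
    raw = rawpy.imread(raw_path)
    # Demosaic with camera white balance, no auto-brightening, 16-bit output.
    rgb = raw.postprocess(use_camera_wb=True, half_size=False,
                          no_auto_bright=True, output_bps=16)
    np.save(os.path.join(cache_dir, os.path.basename(raw_path) + '.npy'),
            rgb)  # uint16 array of shape (H, W, 3)
# At training time, np.load these files and scale by 1/65535.0 instead of
# decoding the raw files again.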

Questions

If you have questions about the code and data, please email [email protected].

Citation

If you use our code and dataset for research, please cite our paper:

Chen Chen, Qifeng Chen, Jia Xu, and Vladlen Koltun, "Learning to See in the Dark", in CVPR, 2018.

License

MIT License.
