- Download the SIDD train set: google drive or 百度网盘.
- Run python scripts/data_preparation/sidd.py to crop the training image pairs into 512x512 patches and pack them into lmdb format (a conceptual sketch of this step follows below).
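The script above does the cropping and lmdb packing for you; the snippet below is only a conceptual sketch of that step for a single image, not the actual implementation in scripts/data_preparation/sidd.py (the file name and key scheme are made up).

```python
# Conceptual sketch: crop one (hypothetical) training image into 512x512 patches
# and store the PNG-encoded crops in an lmdb database. The real logic lives in
# scripts/data_preparation/sidd.py and also handles the paired GT images.
import cv2
import lmdb

PATCH = 512
img = cv2.imread('noisy_example.png')  # hypothetical input file

env = lmdb.open('./input_crops.lmdb', map_size=1 << 40)  # generous map size
with env.begin(write=True) as txn:
    h, w = img.shape[:2]
    idx = 0
    for y in range(0, h - PATCH + 1, PATCH):
        for x in range(0, w - PATCH + 1, PATCH):
            ok, buf = cv2.imencode('.png', img[y:y + PATCH, x:x + PATCH])
            txn.put(f'{idx:06d}'.encode(), buf.tobytes())  # hypothetical key scheme
            idx += 1
env.close()
```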
- Download the evaluation data (already in lmdb format): google drive or 百度网盘.
- After placing it, the layout should look like ./datasets/SIDD/val/input_crops.lmdb and ./datasets/SIDD/val/gt_crops.lmdb (a quick sanity check is sketched below).
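To sanity-check the evaluation data after placing it, here is a minimal sketch using the standard lmdb Python binding; both databases should open and contain the same number of crops.

```python
# Verify that both validation lmdbs open and report their entry counts.
import lmdb

for path in ('./datasets/SIDD/val/input_crops.lmdb',
             './datasets/SIDD/val/gt_crops.lmdb'):
    env = lmdb.open(path, readonly=True, lock=False)
    with env.begin() as txn:
        print(path, '->', txn.stat()['entries'], 'entries')
    env.close()
```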
- NAFNet-SIDD-width32:
  python -m torch.distributed.launch --nproc_per_node=8 --master_port=4321 basicsr/train.py -opt options/train/SIDD/NAFNet-width32.yml --launcher pytorch
- NAFNet-SIDD-width64:
  python -m torch.distributed.launch --nproc_per_node=8 --master_port=4321 basicsr/train.py -opt options/train/SIDD/NAFNet-width64.yml --launcher pytorch
- Baseline-SIDD-width32:
  python -m torch.distributed.launch --nproc_per_node=8 --master_port=4321 basicsr/train.py -opt options/train/SIDD/Baseline-width32.yml --launcher pytorch
- Baseline-SIDD-width64:
  python -m torch.distributed.launch --nproc_per_node=8 --master_port=4321 basicsr/train.py -opt options/train/SIDD/Baseline-width64.yml --launcher pytorch
- Training uses 8 GPUs by default. Set --nproc_per_node to the number of available GPUs for distributed training and validation (see the snippet below for a quick way to check).
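If you are not sure how many GPUs are visible to PyTorch, a quick check (a minimal sketch, not part of the repo) before picking --nproc_per_node:

```python
# Report the number of CUDA devices visible to PyTorch; use this value for --nproc_per_node.
import torch

print(torch.cuda.device_count(), 'CUDA device(s) visible')
```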
- Pretrained models:
  - NAFNet-SIDD-width32: google drive or 百度网盘
  - NAFNet-SIDD-width64: google drive or 百度网盘
  - Baseline-SIDD-width32: google drive or 百度网盘
  - Baseline-SIDD-width64: google drive or 百度网盘
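After downloading, you can confirm a checkpoint loads before running the test configs. This is a minimal sketch: the file name is hypothetical, and BasicSR-style checkpoints usually keep the weights under a 'params' key, but inspect the printed keys rather than relying on that.

```python
# Load a downloaded checkpoint on CPU and list its top-level keys.
import torch

ckpt = torch.load('NAFNet-SIDD-width32.pth', map_location='cpu')  # hypothetical file name/path
print(list(ckpt.keys()))  # often ['params'] for BasicSR-style checkpoints, but verify
```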
- NAFNet-SIDD-width32:
  python -m torch.distributed.launch --nproc_per_node=1 --master_port=4321 basicsr/test.py -opt ./options/test/SIDD/NAFNet-width32.yml --launcher pytorch
- NAFNet-SIDD-width64:
  python -m torch.distributed.launch --nproc_per_node=1 --master_port=4321 basicsr/test.py -opt ./options/test/SIDD/NAFNet-width64.yml --launcher pytorch
- Baseline-SIDD-width32:
  python -m torch.distributed.launch --nproc_per_node=1 --master_port=4321 basicsr/test.py -opt ./options/test/SIDD/Baseline-width32.yml --launcher pytorch
- Baseline-SIDD-width64:
  python -m torch.distributed.launch --nproc_per_node=1 --master_port=4321 basicsr/test.py -opt ./options/test/SIDD/Baseline-width64.yml --launcher pytorch
- Testing runs on a single GPU by default. Set --nproc_per_node to the number of GPUs for distributed evaluation.
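The test script reports the official metrics; if you just want to spot-check one restored image against its ground truth, a minimal PSNR sketch (file names are hypothetical):

```python
# Compute PSNR between one denoised output and its ground-truth image (8-bit sRGB assumed).
import cv2
import numpy as np

out = cv2.imread('denoised.png').astype(np.float64)  # hypothetical output image
gt = cv2.imread('gt.png').astype(np.float64)         # hypothetical ground truth

mse = np.mean((out - gt) ** 2)
psnr = 10 * np.log10(255.0 ** 2 / mse) if mse > 0 else float('inf')
print(f'PSNR: {psnr:.2f} dB')
```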