Thank you for your nice work! I have two questions; could you help me?

1. You said it's better to set the batch size to 16. Do you mean the total batch size or the per-GPU batch size? There's a hyper-parameter 'batch_size_per_gpu' in your code; if I use 2 GPUs, should I set it to 8 or 16? (See the sketch after these questions.)
2. Did you train on the train_aug dataset, then freeze BN and fine-tune on the train dataset?
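For context, a minimal sketch (hypothetical names, not from this repo) of how a data-parallel setup usually relates the per-GPU value to the total batch size per optimization step:

```python
# Sketch only: assumes standard data parallelism, where each GPU processes
# its own mini-batch and gradients are averaged across GPUs per step.
num_gpus = 2
batch_size_per_gpu = 8  # value passed via --batch_size_per_gpu (assumption)

# Effective (total) batch size seen by the optimizer per step.
effective_batch_size = batch_size_per_gpu * num_gpus

print(f"per GPU: {batch_size_per_gpu}, total per step: {effective_batch_size}")
```

Under that convention, batch_size_per_gpu=8 on 2 GPUs would give a total batch of 16, while batch_size_per_gpu=16 would give 32; which one the author intended is exactly the question here.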
Here are my hyperparameters:
python -u train_voc.py \
  --data_root_path=/home/work/dataset/VOCdevkit/VOC2012_aug \
  --checkpoint_dir=./checkpoints/ \
  --result_filepath=./Results/ \
  --backbone=resnet101 \
  --output_stride=16 \
  --gpu=0,1 \
  --batch_size_per_gpu=16 \
  --dataset=voc2012_aug \
  --base_size=513 \
  --crop_size=513 \
  --freeze_bn=False \
  --weight_decay=4e-5 \
  --lr=0.007 \
  --iter_max=30000 \
  --poly_power=0.9
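For reference, the --lr, --iter_max, and --poly_power flags typically drive a "poly" learning-rate schedule in DeepLab-style training. A minimal sketch, assuming that convention (I haven't checked how this repo implements it):

```python
def poly_lr(base_lr: float, cur_iter: int, max_iter: int, power: float) -> float:
    """Poly learning-rate decay: base_lr * (1 - cur_iter / max_iter) ** power."""
    return base_lr * (1.0 - cur_iter / max_iter) ** power

# With the flags above: lr=0.007, iter_max=30000, poly_power=0.9
print(poly_lr(0.007, 0, 30000, 0.9))      # 0.007 at the first iteration
print(poly_lr(0.007, 15000, 30000, 0.9))  # decayed value at the halfway point
```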