I have trained the model for 100 epochs on SUNRGB-D with a ResNet34 backbone pretrained on ImageNet, and the best mIoU is still 10%. Is this normal?
My batch size is 16, and the other hyperparameters are left at their defaults.
Have you reduced the maximum number of epochs via the command-line arguments, or did you simply stop training after 100 epochs? Our learning rate scheduler reaches the highest learning rate only after 10% of the total epochs. Since we train for 500 epochs, the peak learning rate is reached at epoch 50 and then decreases only slowly, so the learning rate may still be too high when you stop at epoch 100.
Still, the best mIoU should be higher after 100 epochs of training. Due to limited GPU memory, we never trained with a batch size of 16. Do you see the same problem when training with a batch size of 8?
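
For reference, here is a minimal sketch of the warm-up-then-decay behaviour described above, using PyTorch's `OneCycleLR` as a stand-in (this is not the repository's exact training code; the peak learning rate of 0.01 and one scheduler step per epoch are illustrative assumptions). With 500 total epochs and a 10% warm-up, the peak is reached at epoch 50, so a run stopped at epoch 100 is still close to the peak learning rate:

```python
import torch
from torch.optim.lr_scheduler import OneCycleLR

# Single dummy parameter; a real run would pass the model's parameters.
params = [torch.nn.Parameter(torch.zeros(1))]
optimizer = torch.optim.SGD(params, lr=0.01)

total_epochs = 500          # default training length mentioned above
scheduler = OneCycleLR(
    optimizer,
    max_lr=0.01,            # illustrative peak learning rate (assumption)
    epochs=total_epochs,
    steps_per_epoch=1,      # one scheduler step per epoch, for illustration
    pct_start=0.1,          # peak reached after 10% of the run -> epoch 50
)

for epoch in range(1, total_epochs + 1):
    optimizer.step()        # the actual training step would go here
    scheduler.step()
    if epoch in (50, 100, 500):
        lr = optimizer.param_groups[0]["lr"]
        print(f"epoch {epoch:3d}: lr = {lr:.6f}")
```

The printout shows that at epoch 100 the learning rate has barely annealed from its maximum, which is why stopping early can leave the model training at a learning rate that is still relatively high.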