Thanks for your extraordinary work!

In Phase 2, everything goes well when 15 subnetworks are sampled at iteration 5. But at iteration 4, the number of subnetworks triples, since 3 networks on the Pareto front were obtained in the previous iteration. The resulting 45 subnetworks cause an OOM error on a single NVIDIA GeForce RTX 2080 Ti (11 GB) in my case. How much GPU memory is needed to run the whole experiment pipeline?

I am able to reproduce the MoNuSeg mIoU and DICE for the Phase 1 search. The parameters are the same, but the MACs reported in the training log seem to be 15-20x what the paper shows. Is this because the paper's MACs are calculated with a smaller input size (resolution) to align with other works?
best,
Larger GPU memory is required. When using an 11 GB GPU, we suggest reducing the network size or keeping only the model with the best accuracy on the Pareto front.
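For example, a minimal sketch of that idea (the `pareto_front` structure and names below are hypothetical, not the repository's actual code):

```python
# Hypothetical sketch of keeping only the best-accuracy candidate on the
# Pareto front before the next search iteration; `pareto_front` is assumed
# to be a list of (accuracy, checkpoint) pairs, not the repository's real API.

def prune_pareto_front(pareto_front):
    """Keep only the highest-accuracy candidate to bound GPU memory usage."""
    best = max(pareto_front, key=lambda item: item[0])
    return [best]

# Three parents would otherwise each spawn 15 subnetworks (45 total);
# retaining a single parent keeps the next iteration at 15 subnetworks.
pareto_front = [(0.81, "net_a.pth"), (0.79, "net_b.pth"), (0.80, "net_c.pth")]
pareto_front = prune_pareto_front(pareto_front)
```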
I checked the MACs on our GPU/CPU again by loading our pre-trained model, and the MACs and parameters match what we reported. However, we found that if we load the model on a Google Colab GPU, the MACs can be different: the measured MACs can change with the computing architecture. We will add this point to the README.
If you find the MACs are different when you use a GPU, you can save the trained model and run inference on the CPU (which should be quick) to compute the MACs; they should then match what we reported.
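For reference, a CPU-side MACs check could look roughly like the following; it uses the third-party `thop` package, and the model class, checkpoint path, and input resolution are placeholders rather than the repository's actual names:

```python
# Minimal sketch of computing MACs on CPU with thop; MySearchedNet,
# checkpoint.pth, and the 512x512 input size are illustrative placeholders.
import torch
from thop import profile

model = MySearchedNet()  # hypothetical model class; replace with the searched network
model.load_state_dict(torch.load("checkpoint.pth", map_location="cpu"))
model.eval()

dummy = torch.randn(1, 3, 512, 512)  # use the same input resolution as the paper
with torch.no_grad():
    macs, params = profile(model, inputs=(dummy,))
print(f"MACs: {macs / 1e9:.2f}G, Params: {params / 1e6:.2f}M")
```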