Thanks for your work!
I have two questions:
1. In the paper the authors use a cosine LR schedule, so why does your code use a fixed LR?
2. The W3A2 accuracy has been reported, but have you trained W4A4 ResNet-18 and reached the accuracy in the paper?
Ahhh, I could not obtain better accuracy with cosine decay, so I used step decay, not a fixed LR.
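For reference, both schedules are one-liners in PyTorch; a minimal sketch (the milestones and epoch counts here are illustrative, not this repo's actual settings):

```python
import torch

model = torch.nn.Linear(10, 10)  # placeholder model
optimizer = torch.optim.SGD(model.parameters(), lr=0.01, momentum=0.9)

# Step decay: multiply the LR by 0.1 at the given epochs.
scheduler = torch.optim.lr_scheduler.MultiStepLR(
    optimizer, milestones=[30, 60, 80], gamma=0.1)

# Cosine decay, as in the paper (T_max = total training epochs):
# scheduler = torch.optim.lr_scheduler.CosineAnnealingLR(optimizer, T_max=90)

for epoch in range(90):
    # ... train one epoch ...
    scheduler.step()
```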
Also, config.yaml is just a template; please refer to the configuration files in the example folder.
I have tried W4A4, but its accuracy is lower than the authors' result.
Thanks for your reply.
Another question: have you ever verified the effect of grad_scale?
In my earlier quantization experiments (not LSQ), the weight LR was often much smaller than the scale LR,
but grad scale approximately decays the LR for the scale, which confuses me.
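For context, the gradient scaling in the LSQ paper sets g = 1/sqrt(N_W * Q_P) and is usually implemented with a detach trick that leaves the forward value unchanged but multiplies the backward gradient by g. A minimal PyTorch sketch (function names are illustrative, not necessarily this repo's):

```python
import torch

def grad_scale(x, scale):
    # Forward: returns x unchanged. Backward: gradient w.r.t. x is multiplied by `scale`.
    return (x - x * scale).detach() + x * scale

def lsq_quantize(w, s, q_n, q_p):
    # LSQ step-size gradient scale g = 1 / sqrt(N_W * Q_P) from the paper.
    g = 1.0 / (w.numel() * q_p) ** 0.5
    s = grad_scale(s, g)
    # Quantize with a straight-through estimator for round().
    w_q = torch.clamp(w / s, -q_n, q_p)
    w_q = (w_q.round() - w_q).detach() + w_q
    return w_q * s

# Example: 4-bit weights -> Q_N = 8, Q_P = 7, with the paper's init for s.
w = torch.randn(64, 64)
s = torch.nn.Parameter(2 * w.abs().mean() / (7 ** 0.5))
w_q = lsq_quantize(w, s, q_n=8, q_p=7)
```

Since g is well below 1 for typical layer sizes, the step size effectively trains with a much smaller LR than the weights, which is the decay the question above refers to.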