Ask for "practical tricks" for Hungarian loss #3
Comments
To reproduce the results in the paper, you don't need to change anything in the code; the code is exactly what produced those results. Whoops, lines 233 to 239 are indeed not used, well spotted! I intended the bounding box and state prediction experiments to use the average set loss, but clearly I was using just the final set. The DSPN results should be better with the average set loss, but I'll have to try and see whether the hyperparameters still work.
I tried a run where I used the average set loss with the Hungarian loss by changing lines 240 to 242 in 2515f61 accordingly.
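For context, here is a minimal sketch of the difference between supervising only the final set and averaging the loss over every intermediate set produced by DSPN's inner optimisation loop. The names (`progress` for the list of intermediate sets, `hungarian_loss` for the pairwise set loss) are illustrative assumptions, not the repository's exact API.

```python
import torch

def final_set_loss(progress, target_set, hungarian_loss):
    # Supervise only the last set in the inner-loop trajectory
    # (what the released Hungarian-loss code effectively did).
    return hungarian_loss(progress[-1], target_set)

def average_set_loss(progress, target_set, hungarian_loss):
    # "Practical trick" from Section 3.2: average the loss over all
    # intermediate sets so the whole trajectory is supervised.
    losses = [hungarian_loss(p, target_set) for p in progress]
    return torch.stack(losses).mean()
```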
The following single-run result corresponds to Table 2 in the paper.

AP 98 and AP 99 are worse, whereas AP 50, AP 90, and AP 95 are all better than in the paper. The following single-run result corresponds to Table 3 in the paper.

This time, almost all results are better than in the paper (especially AP 0.5 and AP 0.25) without changing any hyperparameters. More iterations also no longer make some of the results worse, so the algorithm is simply more stable. Hooray!
Thank you for the detailed response!

I'm updating the paper with the fixed results in an appendix soon.

My pleasure :)
Hi, it seems that the code uses the average set loss (the "practical tricks" in Section 3.2) for the Chamfer loss but not for the Hungarian loss. Should I use the average set loss or only the loss on the final set to reproduce your bounding box and state prediction results?

Also, lines 233-239 seem to be unused in the rest of the code!
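For reference, a minimal sketch of the Hungarian set loss being discussed, assuming equal-sized sets of feature vectors and using SciPy's linear_sum_assignment; this is illustrative only, not the repository's implementation.

```python
import torch
from scipy.optimize import linear_sum_assignment

def hungarian_loss_single(pred, target):
    # pred, target: (n_points, dim) tensors describing two sets of equal size.
    # Pairwise squared L2 cost between every predicted and target element.
    cost = torch.cdist(pred, target, p=2) ** 2
    # The Hungarian algorithm picks the matching with minimal total cost.
    row_ind, col_ind = linear_sum_assignment(cost.detach().cpu().numpy())
    # Average the matched costs; gradients flow through `cost`.
    return cost[row_ind, col_ind].mean()
```

The question above is then whether this loss is applied only to the final predicted set or averaged over all intermediate sets, as is done for the Chamfer loss.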