Question about sigma_sigmoid #39
Thank you very much for your interest in our work. The reason is that the SDF prediction from the decoder is not in units of meters; it is scaled by this sigma_sigmoid. You may also check the BCE loss (Line 17 in commit 0fbaf8a).
The prediction is compared against the scaled label rather than the label itself. Consequently, when calculating the gradient, the sample-point coordinates are already in meters while the SDF predictions are not, so we need to scale the gradient to align the units.
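The unit-alignment argument can be sketched with a toy decoder (my own stand-in, not the repository's code, and a hypothetical sigma_sigmoid value): if the network output is the metric SDF divided by sigma_sigmoid, then autograd's gradient has units of 1/m, and multiplying it by sigma_sigmoid restores a metric gradient whose norm is 1 for a valid SDF.

```python
import torch

sigma_sigmoid = 0.05  # hypothetical scale in meters, not the repo's default

# Sample points in meters; a stand-in "decoder" whose output is the true SDF
# of a unit sphere divided by sigma_sigmoid (i.e. not in meters).
coords = torch.randn(8, 3, requires_grad=True) + 2.0
pred = (coords.norm(dim=-1) - 1.0) / sigma_sigmoid

# Autograd gives d(pred)/d(coords), which has units of 1/m because pred is
# dimensionless while coords are in meters.
g = torch.autograd.grad(pred.sum(), coords)[0]

# Multiplying by sigma_sigmoid restores the metric SDF gradient (m/m),
# whose norm is 1 everywhere for a true SDF.
g_meters = g * sigma_sigmoid
print(g_meters.norm(dim=-1))  # ~1.0 for every sample
```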
Thanks for the detailed answer. I have a follow-up question. I am building on your code for my research project and adding a loss based on the Hessian at surface points, so I need to compute the second derivative. If I differentiate the scaled gradient w.r.t. the surface points/samples to get the Hessian, would I need to scale the Hessian as well? What do you think? Thanks in advance.
I think you do not need to scale the Hessian in this case.
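The unit bookkeeping supports this: here is a hedged sketch (a toy setup of my own, with a hypothetical sigma_sigmoid, not the repository's code). Once the first derivative has been rescaled by sigma_sigmoid into m/m, differentiating it again with respect to coordinates already in meters yields a Hessian in units of 1/m, so no further scaling factor is needed.

```python
import torch

sigma_sigmoid = 0.05  # hypothetical value
x = torch.randn(4, 3, requires_grad=True) + 2.0
pred = (x.norm(dim=-1) - 1.0) / sigma_sigmoid  # stand-in decoder: SDF / sigma

# Rescale the first derivative once, as done for the Eikonal term.
g = torch.autograd.grad(pred.sum(), x, create_graph=True)[0] * sigma_sigmoid

# Differentiate each component of the already-rescaled gradient; the result
# is the metric Hessian (units 1/m) and needs no extra sigma_sigmoid factor.
rows = [torch.autograd.grad(g[:, i].sum(), x, retain_graph=True)[0]
        for i in range(3)]
H = torch.stack(rows, dim=1)  # shape (4, 3, 3), symmetric for a smooth SDF
```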
Hi! Thanks for the great work. I have a question regarding the parameter sigma_sigmoid.
In this part of the code (SHINE_mapping/shine_batch.py, Line 142 in commit 0fbaf8a), I noticed that you scale the SDF gradients using sigma_sigmoid.
When calculating the Eikonal loss, we typically use an L2 loss to enforce that the gradients have a norm of 1. With this scaling, however, it seems you reduce the error compared to what would be computed without scaling.
I am wondering whether you do this to scale the Eikonal loss overall. If so, couldn't you simply multiply the Eikonal loss by a scaling factor directly? Or is there another reason behind this approach?
Thanks in advance.
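For what it's worth, scaling the gradient inside the norm is not equivalent to scaling the Eikonal loss by a constant factor afterwards: the former moves the target gradient norm away from 1, while the latter only reweights the penalty. A quick numeric check with toy values of my own (not values from the repository) illustrates the difference:

```python
import torch

s = 0.1  # stand-in for a sigma_sigmoid-like factor
g = torch.tensor([[2.0, 0.0, 0.0]])  # a raw, un-normalized gradient

# Scaling the gradient before the norm changes where the loss is zero:
# the implicit target norm becomes 1/s rather than 1.
loss_scaled_grad = ((s * g).norm(dim=-1) - 1.0).pow(2).mean()       # (0.2 - 1)^2 = 0.64

# Scaling the loss afterwards keeps the target norm at 1.
loss_scaled_loss = (s ** 2) * (g.norm(dim=-1) - 1.0).pow(2).mean()  # 0.01 * 1 = 0.01
```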