Is your feature request related to a problem? Please describe.
It seems that KliFF natively supports only RMSE-type loss functions. While users can define their own residual functions, the residual is always squared when the loss is computed (lines 528, 575, 851 in loss.py):
loss = 0.5 * np.linalg.norm(residual) ** 2   # NumPy path: half the sum of squared residuals
loss = torch.sum(torch.pow(residual, 2))     # PyTorch path: sum of squared residuals
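For concreteness, both expressions reduce to a (scaled) sum of squared residuals, so minimizing either is equivalent to minimizing the RMSE. A quick standalone check (not KliFF code):

```python
import numpy as np
import torch

residual = np.array([1.0, -2.0, 3.0])

numpy_loss = 0.5 * np.linalg.norm(residual) ** 2                   # 0.5 * (1 + 4 + 9) = 7.0
torch_loss = torch.sum(torch.pow(torch.from_numpy(residual), 2))   # 1 + 4 + 9 = 14.0

# Both are proportional to sum(r**2); an MAE-style loss, sum(abs(r)),
# cannot be obtained without modifying the residual itself.
print(numpy_loss, torch_loss.item())
```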
Describe the solution you'd like
It would be helpful to be able to specify which loss function should be applied to the residual, e.g. MAE, RMSE, etc., via a flag in the Loss constructor. Since different loss functions may work better or worse depending on the specific use case, a simple switch in the loss computation could be useful; see the sketch below.
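As a rough illustration, the switch could look something like the following. The `loss_type` flag and the `_compute_loss` helper are hypothetical names for this sketch, not part of the current KliFF API:

```python
import numpy as np

def _compute_loss(residual: np.ndarray, loss_type: str = "mse") -> float:
    """Hypothetical loss switch applied to a user-supplied residual vector."""
    if loss_type == "mse":
        # current behaviour in loss.py: half the sum of squared residuals
        return 0.5 * np.linalg.norm(residual) ** 2
    if loss_type == "mae":
        # sum of absolute residuals
        return float(np.sum(np.abs(residual)))
    if loss_type == "rmse":
        # root of the mean squared residual
        return float(np.sqrt(np.mean(residual ** 2)))
    raise ValueError(f"unknown loss_type: {loss_type!r}")

# Usage idea (hypothetical constructor argument):
#   loss = Loss(calculator, residual_fn=my_residual, loss_type="mae")
```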
Describe alternatives you've considered
Knowing that the square is always applied, an alternative approach would be to craft a custom residual function that compensates for it, for example by returning the (signed) square root of the raw residual; a sketch follows below. I believe this is a feasible short-term solution, but it may make the code harder to follow.
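A minimal sketch of this workaround, assuming the residual-function signature shown in the KliFF docs (it may differ across versions; per-configuration weighting is omitted for brevity). Since loss.py squares whatever the residual function returns, returning sign(r) * sqrt(|r|) makes the effective loss behave like a sum of absolute errors (MAE-style), up to the 0.5 prefactor:

```python
import numpy as np

def sqrt_residual(identifier, natoms, weight, prediction, reference, data):
    # Raw residual r = prediction - reference.
    r = prediction - reference
    # Its square in the loss becomes |r|, i.e. an absolute-error term.
    return np.sign(r) * np.sqrt(np.abs(r))

# Usage idea: pass it as the residual function, e.g.
#   loss = Loss(calculator, residual_fn=sqrt_residual)
```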