simple_trainer_mcmc.py doesn't reproduce the evaluation results on Mip360 dataset #323
The script used to run the benchmark and the stats for the above-mentioned commit can be found in PR #324. Let me know if you have any issues. The evaluation metrics were run on a cluster of A100 GPUs with PyTorch 2.1.2 and CUDA toolkit 11.8. Please note that earlier versions of PyTorch, such as 2.0.1, have known issues with Splatfacto, so please do not use those versions when collecting Splatfacto metrics.
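As a quick sanity check before benchmarking, you can print the versions in your environment (this just echoes the requirements above):

```bash
# Print the PyTorch and CUDA versions of the current environment;
# the benchmark above used PyTorch 2.1.2 with CUDA 11.8.
python -c "import torch; print(torch.__version__, torch.version.cuda)"
```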
I think hardcoding the data factor as 4 is wrong. The data factors should follow the Mip-NeRF 360 convention: 4 for the outdoor scenes and 2 for the indoor scenes.
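For illustration, a per-scene sweep could look like the sketch below. The flag names mirror the gsplat example scripts, and the scene-to-factor mapping is the usual Mip-NeRF 360 convention; treat both as assumptions on my part, not the repo's official benchmark script.

```bash
# Hypothetical sweep over the 7 Mip360 scenes with the conventional
# downsample factors: outdoor scenes at 4, indoor scenes at 2.
for SCENE in bicycle garden stump; do            # outdoor -> factor 4
    python simple_trainer_mcmc.py \
        --data_dir data/360_v2/$SCENE \
        --data_factor 4 \
        --result_dir results/mcmc/$SCENE
done
for SCENE in room counter kitchen bonsai; do     # indoor -> factor 2
    python simple_trainer_mcmc.py \
        --data_dir data/360_v2/$SCENE \
        --data_factor 2 \
        --result_dir results/mcmc/$SCENE
done
```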
@maturk Thank you very much for your prompt response and for providing the benchmark script! I did get the same results using it. I also figured out why my previous results were bad: I didn't explicitly run the evaluation command, and it seems simple_trainer_mcmc.py automatically evaluates using the checkpoint at step 7000, so my previous results were evaluated at that step. Thank you again for your help!
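For anyone who hits the same pitfall, here is a sketch of an explicit evaluation run on the final checkpoint. The `--ckpt` flag and the checkpoint filename are assumptions based on the gsplat examples; check your results directory for the actual name.

```bash
# Hypothetical: load the final (30k-step) checkpoint and evaluate it,
# rather than relying on the metrics logged at step 7000.
python simple_trainer_mcmc.py \
    --data_dir data/360_v2/garden \
    --data_factor 4 \
    --result_dir results/mcmc/garden \
    --ckpt results/mcmc/garden/ckpts/ckpt_29999.pt
```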
Hi, I wanted to reproduce the evaluation results reported in commit 1cc3d22 on the Mip360 dataset, but I cannot reproduce them.
Right now, the metrics for the 7 scenes of Mip360 are as follows.
MCMC looks a lot worse than Splatfacto.
I am simply using simple_trainer_mcmc.py with the default settings (I didn't change any config params); my command was something like the one shown below (flags per the gsplat examples; paths are placeholders for my actual ones):
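```bash
# Default-settings run on a single scene; no config params changed.
# (Paths are placeholders; flag names follow the gsplat examples.)
python simple_trainer_mcmc.py \
    --data_dir data/360_v2/bicycle \
    --result_dir results/mcmc/bicycle
```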
Could you point me to how I can reproduce the results? Thank you!
Here are some example rendering comparisons between Splatfacto (left) and MCMC (right).