
simple_trainer_mcmc.py doesn't reproduce the evaluation results on Mip360 dataset #323

Closed
yt2639 opened this issue Aug 8, 2024 · 3 comments

Comments


yt2639 commented Aug 8, 2024

Hi, I wanted to reproduce the evaluation results reported in commit 1cc3d22 on the Mip-NeRF 360 dataset, but I cannot reproduce them.

Right now, the metrics for the seven Mip-NeRF 360 scenes are as follows:
[image: table of evaluation metrics for the seven Mip-NeRF 360 scenes]

MCMC looks a lot worse than Splatfacto.

I am simply using simple_trainer_mcmc.py with default settings (I didn't change any config params); my command is shown below:

python simple_trainer_mcmc.py \
    --data_dir mipnerf360_data/${scene}/ --data_factor 4 \
    --result_dir mipnerf360_data/${scene}/3dgsmcmc_4x \
    --test_every 8 \
    --disable_viewer 

Could you point me to how I can reproduce the results? Thank you!

Here are some example rendering comparisons between Splatfacto (left) and MCMC (right).
[image: bicycle scene, Splatfacto vs. MCMC]
[image: garden scene, Splatfacto vs. MCMC]

maturk (Collaborator) commented Aug 8, 2024

The script used to run the benchmark and produce the stats for the above-mentioned commit can be found in PR #324. Let me know if you have any issues. The evaluation metrics were run on a cluster of A100 GPUs with PyTorch 2.1.2 and CUDA toolkit 11.8. Please note that earlier versions of PyTorch, such as v2.0.1, have known issues with Splatfacto, so please do not use those versions when computing Splatfacto metrics.
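As a quick environment sanity check (this uses only standard PyTorch APIs, nothing gsplat-specific), you can print the PyTorch and CUDA versions your run will actually use:

python -c "import torch; print(torch.__version__, torch.version.cuda)"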

maturk (Collaborator) commented Aug 8, 2024

I think hardcoding the data factor as 4 is wrong. The data factors should be (4, 2, 2, 4, 4, 2, 2) for the scenes "bicycle", "bonsai", "counter", "garden", "stump", "kitchen", and "room", respectively; a sketch of a full sweep is below.
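For reference, a minimal sketch of sweeping all seven scenes with the matching factors; the result-dir layout and flags are carried over from the command earlier in this thread, so adjust them to your setup:

scenes=(bicycle bonsai counter garden stump kitchen room)
factors=(4 2 2 4 4 2 2)
for i in "${!scenes[@]}"; do
    # Pair each scene with its downsampling factor from the list above.
    python simple_trainer_mcmc.py \
        --data_dir mipnerf360_data/${scenes[$i]}/ --data_factor ${factors[$i]} \
        --result_dir mipnerf360_data/${scenes[$i]}/3dgsmcmc_${factors[$i]}x \
        --test_every 8 \
        --disable_viewer
done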

yt2639 (Author) commented Aug 8, 2024

@maturk Thank you very much for your prompt response and for providing the benchmark script! Using it, I did get the same results.

I also figured out why my previous results were bad: I didn't explicitly run the evaluation command, and it seems simple_trainer_mcmc.py automatically evaluates using the checkpoint at step 7000. Therefore my previous metrics were computed at step 7000 rather than at the end of training; a sketch of re-running the evaluation from the final checkpoint is below.
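For anyone who hits the same problem, here is a sketch of evaluating the final checkpoint instead. It assumes the trainer accepts a --ckpt flag that skips training and evaluates the given checkpoint; the checkpoint path and filename below are illustrative, so check your result_dir for the actual name of the last saved checkpoint:

python simple_trainer_mcmc.py \
    --data_dir mipnerf360_data/${scene}/ --data_factor ${factor} \
    --result_dir mipnerf360_data/${scene}/3dgsmcmc_${factor}x \
    --disable_viewer \
    --ckpt mipnerf360_data/${scene}/3dgsmcmc_${factor}x/ckpts/ckpt_29999.pt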

Thank you again for your help!

yt2639 closed this as completed Aug 8, 2024