Question about evaluation on EndoNeRF dataset. #7

Closed
Greatxcw opened this issue Dec 22, 2023 · 2 comments

@Greatxcw

Thank you very much for your wonderful work.
While reproducing your code on the EndoNeRF dataset, I ran into the following problems:

  1. I followed the README to prepare the EndoNeRF dataset, but the evaluation results were not very good. Here is what I ran and what it produced:
root@b342aea81d4c:/data/endosurf# CUDA_VISIBLE_DEVICES=0 python src/trainer/trainer_endosurf.py --cfg configs/endosurf/baseline/base_pull.yml --mode test_2d
Setting up [LPIPS] perceptual loss: trunk [vgg], v[0.1], spatial [off]
/usr/local/lib/python3.8/dist-packages/torchvision/models/_utils.py:208: UserWarning: The parameter 'pretrained' is deprecated since 0.13 and may be removed in the future, please use 'weights' instead.
  warnings.warn(
/usr/local/lib/python3.8/dist-packages/torchvision/models/_utils.py:223: UserWarning: Arguments other than a weight enum or `None` for 'weights' are deprecated since 0.13 and may be removed in the future. The current behavior is equivalent to passing `weights=VGG16_Weights.IMAGENET1K_V1`. You can also use `weights=VGG16_Weights.DEFAULT` to get the most up-to-date weights.
  warnings.warn(msg)
Loading model from: /usr/local/lib/python3.8/dist-packages/lpips/weights/v0.1/vgg.pth
[Mode] test_2d
[Load data] dataset: endonerf, scene: pulling_soft_tissues
[Load data] complete!
[Experiment] exp_dir: logs/endosurf/base-endonerf-pulling_soft_tissues
[Load checkpoints] logs/endosurf/base-endonerf-pulling_soft_tissues/ckpt.tar.
DEMO|Render RGBD images
DEMO|Use testset with 8 frames
psnr_rgb_vr: 26.093102359398117
DEMO|ssim_rgb_vr: 0.8609503507614136
DEMO|lpips_rgb_vr: 0.19325168430805206
DEMO|rmse_d_vr: 4.342188562685641

I suspected that the *.pkl files generated by data/endonerf/preprocess.py were wrong, so I downloaded the *.pkl files you provided. Here is what I ran and what it produced:

root@b342aea81d4c:/data/endosurf# CUDA_VISIBLE_DEVICES=0 python src/trainer/trainer_endosurf.py --cfg configs/endosurf/baseline/base_pull.yml --mode test_2d
Setting up [LPIPS] perceptual loss: trunk [vgg], v[0.1], spatial [off]
/usr/local/lib/python3.8/dist-packages/torchvision/models/_utils.py:208: UserWarning: The parameter 'pretrained' is deprecated since 0.13 and may be removed in the future, please use 'weights' instead.
  warnings.warn(
/usr/local/lib/python3.8/dist-packages/torchvision/models/_utils.py:223: UserWarning: Arguments other than a weight enum or `None` for 'weights' are deprecated since 0.13 and may be removed in the future. The current behavior is equivalent to passing `weights=VGG16_Weights.IMAGENET1K_V1`. You can also use `weights=VGG16_Weights.DEFAULT` to get the most up-to-date weights.
  warnings.warn(msg)
Loading model from: /usr/local/lib/python3.8/dist-packages/lpips/weights/v0.1/vgg.pth
[Mode] test_2d
[Load data] dataset: endonerf, scene: pulling_soft_tissues
[Load data] complete!
[Experiment] exp_dir: logs/endosurf/base-endonerf-pulling_soft_tissues
[Load checkpoints] logs/endosurf/base-endonerf-pulling_soft_tissues/ckpt.tar.
DEMO|Render RGBD images
DEMO|Use testset with 8 frames
psnr_rgb_vr: 35.01117011198838
DEMO|ssim_rgb_vr: 0.9555621147155762
DEMO|lpips_rgb_vr: 0.1203545331954956
DEMO|rmse_d_vr: 2.019412280709101

With your *.pkl files, psnr_rgb_vr, ssim_rgb_vr, and lpips_rgb_vr are consistent with the results in the paper. I wonder why this happens?

2. The rmse_d_vr is still not consistent with the results in the paper, so I checked the dataset you provided and found that some depth images are black, like this:
[screenshot: an all-black depth map from the provided dataset]
I wonder if that is expected or if something else is wrong...

That's all. I hope you can help me answer these questions.

@Ruyi-Zha
Owner

Ruyi-Zha commented Jan 1, 2024

  1. Thanks for your interest in our work. There are some random operations in preprocess.py, e.g., point cloud downsampling, which may perturb the poses. If you use the provided pre-trained model together with *.pkl files you generated yourself, you are likely to get poor evaluation results (because the poses differ). To get correct results with your own *.pkl files, please re-train the network. (A sketch illustrating this reproducibility point follows the list.)

  2. There was a bug in the RMSE code, and it is now fixed. Please see this issue. (A sketch of a masked depth RMSE also follows the list.)
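
To make the reproducibility point in item 1 concrete, here is a minimal, hypothetical sketch (not the actual preprocess.py code; downsample_points is an invented name): random downsampling selects a different subset of points on every run unless the RNG seed is fixed, so anything derived from the sampled cloud, such as the normalization poses, drifts between preprocessing runs.

    import numpy as np

    def downsample_points(points, n_samples, seed=None):
        # Randomly subsample a point cloud. With seed=None, each run
        # picks a different subset, so quantities derived from it
        # (e.g. poses) differ between runs; a fixed seed makes the
        # result reproducible.
        rng = np.random.default_rng(seed)
        idx = rng.choice(len(points), size=n_samples, replace=False)
        return points[idx]

    pts = np.random.rand(10000, 3)
    a = downsample_points(pts, 1000, seed=0)
    b = downsample_points(pts, 1000, seed=0)
    assert np.allclose(a, b)  # same seed, same subset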
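
For item 2, here is a minimal sketch of a masked depth RMSE, assuming zero-valued pixels mark missing ground-truth depth (the actual metric code and the fix live in the repository; depth_rmse is an invented name):

    import numpy as np

    def depth_rmse(pred, gt):
        # Exclude pixels with no valid ground-truth depth (e.g. the
        # all-black depth maps mentioned above, where gt == 0);
        # averaging over invalid pixels would distort the error.
        mask = gt > 0
        diff = pred[mask] - gt[mask]
        return float(np.sqrt(np.mean(diff ** 2)))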

@Greatxcw
Author

Greatxcw commented Jan 8, 2024

OK, thank you for your reply.

Ruyi-Zha closed this as completed Jan 8, 2024