
a problem of memory consumption #2

Closed
zhz120 opened this issue Mar 15, 2022 · 9 comments

@zhz120

zhz120 commented Mar 15, 2022

Hello, I set `--batch_size=1 --n_views 5 --iteration 4` to test DTU with the pretrained model you provide, but the GPU memory consumption is 5.4 GB, while your paper says it is about 2.4 GB. The same happens on the Tanks & Temples dataset: my memory consumption is about 5 GB versus 2.4 GB in the paper. My input image resolution for DTU is also 1600x1152. I do not know why; can you help me? Thanks.
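
A minimal sketch of reading the peak memory from PyTorch's own counter instead of nvidia-smi (the forward pass below is only a placeholder, not the actual eval code):

```python
import torch

torch.cuda.reset_peak_memory_stats()

with torch.no_grad():
    # outputs = model(sample_cuda)   # placeholder for the actual eval forward pass
    pass

peak_gib = torch.cuda.max_memory_allocated() / 1024 ** 3
print(f"peak allocated: {peak_gib:.2f} GiB")
```

Note that nvidia-smi also counts the CUDA context and PyTorch's caching allocator, so its number is usually larger than this counter.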

@FangjinhuaWang
Owner

Hi,

Have a look at this from my previous work; it may be related to the PyTorch version. I think you should get the same results with my environment.

@zhz120
Author

zhz120 commented Mar 16, 2022

I appreciate your response, and I found something interesting in your code: when I test your pretrained DTU model on a 3090 it uses about 5.4 GB, but with the same pretrained model on a P40 or a 1080 Ti it uses about 3.9 GB. The CUDA version is the same in all cases (11.4) and the PyTorch version is >= 1.8.1, so I guess I should use the same environment as you: CUDA 10.1 and torch 1.4.
I also tested your DTU pretrained model on Tanks & Temples with the 3090, but my mean on the intermediate set is 54.91, while the paper reports 56.22. I'll test it again with the same environment as yours; it may be interesting.
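
For comparing runs across machines, a small sketch that dumps the relevant version information (standard PyTorch attributes only):

```python
import torch

print("torch:", torch.__version__)
print("CUDA (torch build):", torch.version.cuda)
print("cuDNN:", torch.backends.cudnn.version())
print("GPU:", torch.cuda.get_device_name(0))
```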

@zhz120
Author

zhz120 commented Mar 21, 2022

Hello, does IterMVS also have the issue from PatchmatchNet where the random seed for the robust training strategy is not set, which sometimes causes the reproduced results to deviate?

@FangjinhuaWang
Owner

The random seeds are all fixed. I retrained the models and got similar results on the benchmarks.
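
For reference, the usual pattern for pinning all the seeds in a PyTorch training script looks roughly like this (a sketch of the standard calls, not necessarily the exact code in this repo):

```python
import random
import numpy as np
import torch

def set_seed(seed: int = 0) -> None:
    # Seed Python, NumPy and PyTorch (CPU + all GPUs).
    random.seed(seed)
    np.random.seed(seed)
    torch.manual_seed(seed)
    torch.cuda.manual_seed_all(seed)
    # Optional: trade speed for cuDNN determinism.
    torch.backends.cudnn.deterministic = True
    torch.backends.cudnn.benchmark = False
```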

@zhz120
Author

zhz120 commented Mar 22, 2022

Hello, but I used a single V100 GPU, switched the environment to CUDA 10.1 and torch 1.4.0, and kept the other settings almost exactly as in your code. I retrained with iteration=4, batch=4 for 16 epochs. The test memory consumption is normal now, but the score on Tanks & Temples is still only 55.24, whereas your paper reports 56.22. What do you think the reason could be? Thanks!

@FangjinhuaWang
Owner

Hi,

The results of my retrained model on the same machine are:
(1) Tanks&Temples: intermediate 55.90 (drop), advanced 33.39 (improve)
(2) ETH3D: training 67.07 (improve)

Maybe this can explain the difference (from the repo of the 5-second NeRF): training should be approximately deterministic since all the random numbers are seeded, but floating-point addition is not associative and the order of gradient accumulation depends on thread scheduling, so there can be slight numerical differences.
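
A tiny illustration of that point: the same additions grouped in a different order already differ in the last bits.

```python
a, b, c = 0.1, 0.2, 0.3
print((a + b) + c == a + (b + c))  # False
print((a + b) + c, a + (b + c))    # 0.6000000000000001 vs 0.6
```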

I am not an expert in this. If you find out the reason, please let me know.

@zhz120
Author

zhz120 commented Mar 24, 2022

Hello, I think I've found the reason. I retrained the model and tested on DTU: the overall score is 0.363, and acc and comp match the numbers in the paper. So I think the different F-score on Tanks & Temples comes from different camera poses; my camera poses are the SfM data from MVSNet. Could you provide me with a copy of your SfM data? I really want to do something meaningful based on your work. Thank you very much.

@FangjinhuaWang
Owner

Hi,

I directly reuse the camera parameters from PatchmatchNet for Tanks & Temples. Actually, the poses are the same as MVSNet's. The only difference is that we reduce the depth range, since the background is usually not evaluated in the benchmark (MVSNet also provides a folder with a small depth range for the same purpose). On ETH3D, we simply use colmap_input.py to convert the SfM files from the ETH3D benchmark and do not manually reduce the depth range, since the ground truth covers the large-scale scenes.
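
For reference, reducing the depth range amounts to rewriting the last line of each MVSNet-style cam file, which stores depth_min and depth_interval (optionally followed by depth_num and depth_max). A rough sketch, where the function name, the num_depth default, and the exact layout handling are only illustrative assumptions, not the actual script:

```python
def reduce_depth_range(cam_path: str, new_min: float, new_max: float,
                       num_depth: int = 192) -> None:
    # Assumes the MVSNet-style cam.txt layout, where the last non-empty line is
    # "depth_min depth_interval [depth_num depth_max]".
    with open(cam_path) as f:
        lines = [line.rstrip("\n") for line in f]
    idx = max(i for i, line in enumerate(lines) if line.strip())  # depth-range line
    interval = (new_max - new_min) / (num_depth - 1)
    lines[idx] = f"{new_min} {interval} {num_depth} {new_max}"
    with open(cam_path, "w") as f:
        f.write("\n".join(lines) + "\n")
```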

@zhz120
Author

zhz120 commented Mar 25, 2022

Thanks, I will try it.

@zhz120 zhz120 closed this as completed Mar 25, 2022