How to get the consistency measurement? #2
Comments
We followed the linked repository (https://github.com/phoenix104104/fast_blind_video_consistency) to calculate LPIPS; please find the details there. Thanks.
I have read their code. In evaluate_LPIPS.py they use LPIPS to get the perceptual distance between the processed image P and their model output O, but P and O are the same frame of the video. In evaluate_WarpError.py, they use optical flow predicted by FlowNet2 between frame1 and frame2 to warp frame2 to frame1, then calculate the L2 distance on the non-occluded pixels. They do not use masks in the LPIPS metric.
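For reference, here is a minimal sketch of the warp-error idea described above, not the exact evaluate_WarpError.py code. The inputs `flow` (backward flow from frame1 to frame2, e.g. from FlowNet2) and `occ_mask` (1 for non-occluded pixels) are assumed to be given.

```python
# Hedged sketch of a flow-based warp error; tensor shapes and inputs are assumptions.
import torch
import torch.nn.functional as F

def warp(img, flow):
    """Warp img (1,C,H,W) towards the reference frame using flow (1,2,H,W) in pixels."""
    _, _, h, w = img.shape
    ys, xs = torch.meshgrid(torch.arange(h), torch.arange(w), indexing="ij")
    grid = torch.stack((xs, ys), dim=0).float().unsqueeze(0)   # (1,2,H,W), (x, y) order
    coords = grid + flow                                        # sampling locations
    # normalize to [-1, 1] as expected by grid_sample, which takes (N,H,W,2) grids
    coords[:, 0] = 2.0 * coords[:, 0] / (w - 1) - 1.0
    coords[:, 1] = 2.0 * coords[:, 1] / (h - 1) - 1.0
    return F.grid_sample(img, coords.permute(0, 2, 3, 1), align_corners=True)

def warp_error(frame1, frame2, flow, occ_mask):
    """Mean squared error between frame1 and warped frame2 over non-occluded pixels."""
    warped = warp(frame2, flow)
    diff = (frame1 - warped) ** 2 * occ_mask               # zero out occluded pixels
    return diff.sum() / (occ_mask.sum() * frame1.shape[1] + 1e-8)
```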
@kigane Have you solved this problem? It's strange that none of StylizedNeRF, StyleRF, Learning to Stylize Novel Views, etc. provide a calculation method for consistency.
I have the same doubt. Why hasn't the calculation method for the quantitative metric been provided, even though it's the only evaluation criterion?
Have you tried testing the generated results using the code from "warperror.py"? If so, are the results close to those in the paper? |
E(O_i, O_j) = LPIPS(O_i, M_{i,j}, W_{i,j}(O_j)) — how do you get the mask M_{i,j}, and how is it applied to LPIPS?
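One plausible reading of this formula (a sketch, not the authors' confirmed method): warp O_j into view i, build a non-occlusion mask M_{i,j} by a forward-backward flow consistency check, and average a spatial LPIPS map over the valid pixels only. The `warp` helper is the one sketched above, and `flow_ij` / `flow_ji` are assumed precomputed flows; the mask threshold is a guess.

```python
# Hedged sketch of a masked LPIPS consistency error; inputs in [-1, 1], shape (1,C,H,W).
import torch
import lpips

loss_fn = lpips.LPIPS(net="alex", spatial=True)   # spatial=True returns a per-pixel distance map

def occlusion_mask(flow_ij, flow_ji, thresh=1.0):
    """Non-occlusion mask via forward-backward flow consistency (1 = valid pixel)."""
    flow_ji_warped = warp(flow_ji, flow_ij)                     # backward flow seen from frame i
    fb_err = (flow_ij + flow_ji_warped).norm(dim=1, keepdim=True)   # (1,1,H,W) cancellation error
    return (fb_err < thresh).float()

def masked_lpips(o_i, o_j, flow_ij, flow_ji):
    """Consistency error E(O_i, O_j): masked mean of the spatial LPIPS map."""
    warped_j = warp(o_j, flow_ij)                 # W_{i,j}(O_j)
    mask = occlusion_mask(flow_ij, flow_ji)       # M_{i,j}
    dist_map = loss_fn(o_i, warped_j)             # (1,1,H,W) per-pixel LPIPS distances
    return (dist_map * mask).sum() / (mask.sum() + 1e-8)
```

Whether the mask multiplies the warped image before LPIPS or weights the spatial LPIPS map afterwards is exactly the ambiguity raised in this issue; the snippet takes the latter interpretation.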