
Evaluation results different from the results from the paper #44

Open · peterw2333 opened this issue Sep 30, 2024 · 2 comments

@peterw2333
I ran the evaluation script on the provided checkpoint and found the results somewhat different from those reported in the paper. In particular, FID and R-precision are higher, while MultiModality and MM Distance are significantly lower than the reported values. What could be the reason for this discrepancy?

The evaluation results from the checkpoint:

========== MM Distance Summary ==========
---> [ground truth] Mean: 3.7845 CInterval: 0.0008
---> [InterGen] Mean: 3.7990 CInterval: 0.0022
========== R_precision Summary ==========
---> [ground truth](top 1) Mean: 0.4249 CInt: 0.0048;(top 2) Mean: 0.6018 CInt: 0.0061;(top 3) Mean: 0.7042 CInt: 0.0049;
---> [InterGen](top 1) Mean: 0.4316 CInt: 0.0079;(top 2) Mean: 0.5864 CInt: 0.0088;(top 3) Mean: 0.6757 CInt: 0.0075;
========== FID Summary ==========
---> [ground truth] Mean: 0.2949 CInterval: 0.0101
---> [InterGen] Mean: 6.4828 CInterval: 0.1484
========== Diversity Summary ==========
---> [ground truth] Mean: 7.7497 CInterval: 0.0296
---> [InterGen] Mean: 7.8578 CInterval: 0.0538
========== MultiModality Summary ==========
---> [InterGen] Mean: 1.2164 CInterval: 0.0343
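
For what it's worth, the FID here should be the standard Fréchet distance between Gaussian fits of the embedded motions; a minimal generic sketch (not necessarily the exact evaluator this repo uses):

```python
import numpy as np
from scipy import linalg

def frechet_distance(feats_gt: np.ndarray, feats_gen: np.ndarray) -> float:
    """Standard FID between two (n_samples, dim) feature sets."""
    mu1, sigma1 = feats_gt.mean(axis=0), np.cov(feats_gt, rowvar=False)
    mu2, sigma2 = feats_gen.mean(axis=0), np.cov(feats_gen, rowvar=False)
    diff = mu1 - mu2
    # Matrix square root of the covariance product; numerical noise can
    # introduce small imaginary parts, which are discarded.
    covmean, _ = linalg.sqrtm(sigma1 @ sigma2, disp=False)
    if np.iscomplexobj(covmean):
        covmean = covmean.real
    return float(diff @ diff + np.trace(sigma1 + sigma2 - 2.0 * covmean))
```

Since the means and covariances are estimated from a finite number of samples, even the ground-truth FID is nonzero above (~0.29), and the generated FID can shift a little between runs.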

@22TonyFStark commented Nov 17, 2024

I got similar results too, as shown below:
==================== Replication 0 ====================
Time: 2024-11-17 12:40:47.333736
---> [ground truth] MM Distance: 3.7849
---> [ground truth] R_precision: (top 1): 0.4195 (top 2): 0.5956 (top 3): 0.7017
---> [InterGen] MM Distance: 3.7979
---> [InterGen] R_precision: (top 1): 0.4347 (top 2): 0.5994 (top 3): 0.6837
Time: 2024-11-17 12:40:54.960424
---> [ground truth] FID: 0.3065
---> [InterGen] FID: 6.4763
Time: 2024-11-17 12:40:59.609012
---> [ground truth] Diversity: 7.7747
---> [InterGen] Diversity: 7.8846
Time: 2024-11-17 12:40:59.610366
---> [InterGen] Multimodality: 1.2961
!!! DONE !!!
==================== Replication 1 ====================
Time: 2024-11-17 13:04:51.516707
---> [ground truth] MM Distance: 3.7852
---> [ground truth] R_precision: (top 1): 0.4328 (top 2): 0.6098 (top 3): 0.6979
---> [InterGen] MM Distance: 3.7961
---> [InterGen] R_precision: (top 1): 0.4403 (top 2): 0.5900 (top 3): 0.6572
Time: 2024-11-17 13:04:58.432715
---> [ground truth] FID: 0.2869
---> [InterGen] FID: 6.4652
Time: 2024-11-17 13:05:02.393910
---> [ground truth] Diversity: 7.7341
---> [InterGen] Diversity: 7.8130
Time: 2024-11-17 13:05:02.395204
---> [InterGen] Multimodality: 1.2618
!!! DONE !!!
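
Assuming the Mean/CInterval summaries in the first post are aggregated over replications like these, they presumably come from something like the sketch below; the 1.96 · std / sqrt(n) form is common in motion-generation eval scripts but is an assumption on my part:

```python
import numpy as np

# InterGen FID values from my two replications above
# (the eval script normally runs more replications than this).
fids = np.array([6.4763, 6.4652])

mean = fids.mean()
# 95% confidence half-width under the usual normal approximation.
cinterval = 1.96 * fids.std() / np.sqrt(len(fids))
print(f"Mean: {mean:.4f} CInterval: {cinterval:.4f}")
```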

@ViVi-Vincent

Me too, especially the MM Distance: the ground truth in the paper is 3.75, but the eval output gives 3.78. Does anyone know why?
I'm running on 2× RTX 4090 GPUs.
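
If part of the variation is run-to-run or hardware-dependent, it may be worth checking that the evaluation is fully seeded; a minimal generic PyTorch sketch (not the repo's actual script, which may already do some of this):

```python
import random
import numpy as np
import torch

def seed_everything(seed: int = 42) -> None:
    """Pin the usual sources of randomness before evaluation."""
    random.seed(seed)
    np.random.seed(seed)
    torch.manual_seed(seed)
    torch.cuda.manual_seed_all(seed)
    # cuDNN autotuning can pick different kernels across runs and GPUs.
    torch.backends.cudnn.deterministic = True
    torch.backends.cudnn.benchmark = False
```

Even with fixed seeds, different GPU models can give slightly different floating-point results, though that alone seems unlikely to explain a large metric gap.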
