Interesting paper! Wondering how the classifier will behave if someone comes up with a new XXGAN not included in eval_config.py? We found the pre-trained models failed to ID images generated using WGAN, for instance. Also, do you have any plans to release the training code? Thx!
Our models are trained on 256-pixel images and tested mostly on 256-pixel images. There are cases where we tested on much larger images and it still worked (e.g., SITD), but I would suspect our model works most consistently on images generated at 256 pixels. It might be good to check the size of the images you are trying to identify, and to try resizing them to 256 pixels if they are much smaller (otherwise the center-crop function will pad with zeros).
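To illustrate, here is a minimal preprocessing sketch. It assumes PIL and torchvision are available, that `model` is the pre-trained classifier loaded elsewhere (the exact loading API in this repo is not shown), that the network outputs a single "fake" logit, and that ImageNet normalization statistics apply — treat all of these as assumptions, not the repo's exact pipeline:

```python
# Hedged sketch: resize before center-cropping so the crop never zero-pads.
# `model` and "suspect.png" are hypothetical placeholders.
import torch
from PIL import Image
from torchvision import transforms

preprocess = transforms.Compose([
    transforms.Resize(256),        # scale the short side up/down to 256 px
    transforms.CenterCrop(256),    # now yields a true 256x256 crop, no padding
    transforms.ToTensor(),
    transforms.Normalize(mean=[0.485, 0.456, 0.406],   # assumed ImageNet stats
                         std=[0.229, 0.224, 0.225]),
])

img = Image.open("suspect.png").convert("RGB")
batch = preprocess(img).unsqueeze(0)   # shape: [1, 3, 256, 256]

with torch.no_grad():
    prob_fake = torch.sigmoid(model(batch)).item()  # assumes a single-logit head
print(f"P(fake) = {prob_fake:.3f}")
```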
We observed that our model preserves the "ranking" of real vs. fake, but it can sometimes be miscalibrated. This is because there are domain gaps between datasets, and it remains challenging to fully preserve accuracy when testing the model in an out-of-distribution scenario. However, it is quite surprising that, even though the calibration varies, the model can still separate reals from fakes at a different threshold (not 50%) most of the time. To evaluate this separation, we use average precision (AP).
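Since AP is threshold-free, it rewards correct ranking even when the 50% cutoff is miscalibrated. A small sketch of how you might compute it on your own WGAN test set, assuming scikit-learn is installed (the labels and scores below are made-up placeholders):

```python
# Hedged sketch: threshold-free evaluation with average precision.
# y_true: 1 = fake, 0 = real; y_score: the model's P(fake) outputs.
import numpy as np
from sklearn.metrics import average_precision_score

y_true  = np.array([0, 0, 1, 1, 1])                # placeholder ground truth
y_score = np.array([0.02, 0.30, 0.35, 0.80, 0.90]) # placeholder model scores

ap = average_precision_score(y_true, y_score)
print(f"AP = {ap:.3f}")  # can be high even if the best threshold is far from 0.5
```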
We are planning to release the training code, and you are welcome to train on your own dataset. Given a known test distribution (in this case, the images you want to identify), a training set that contains the same distribution will most likely work better. Our paper purposefully trains on only one method to evaluate generalization, but ideally all methods should be included in the training set for optimal performance.
Also, there will definitely be failure cases for our models, and it would be great if people could try it out and report what works and what doesn't!