Reproducing Figure 1 WWT again #22
I know this is annoying and I'm sorry to bother you! I'm trying to understand the parameters needed to achieve your results. Could you share the hyperparameters that worked best for you, so that I can recover the curves in Figure 1? Thank you, I appreciate it.
Hi, the hyperparameters you listed should be the ones we used for generating Figure 1. (The only difference is that we used sample_len=10.) The results you showed look very different from what we got. Here are some points I want to double-check. If you are using
Let me know if you still have problems reproducing it.
Okay, I understand your points. That's exactly what I did, which should be the same as the parameters in
We used checkpoint 399 to generate it. I am not sure why it had bad results. Would you mind sharing the entire code folder (including
Hi, thank you for offering to help look at the code. I'm honestly not sure what went wrong either. Here is the GDrive link with the code and epoch_id-399 data (generated samples, viz samples): https://drive.google.com/drive/folders/1M2QvzZjyEP9xevFYjNNurFmxEZKhVrEU?usp=sharing Let me know if I can help with anything else. There should also be a TensorBoard file there.
I'm guessing you used the training version with GPUTask and
I used the version without GPUTask and sample_len=10.
Oh okay, nice. Then I'm guessing it's either 1) I got a bad run, so I might rerun it, or 2) I set up the Python environment wrong (bad TensorFlow build, some floating-point stuff). If you are using conda or any env, can you share your Edit: wait, if you also used the version without GPUTask, did you write an extra file to generate the time series? My
I used your generate.py file. Python 3.7.10 and TensorFlow 1.14.0.
Thank you @alireza-msve very much for sharing the information! @CubicQubit, thanks for sharing the code you used. I wanted to debug this for you, but unfortunately I ran out of GPU hours on the cluster I am using a few days ago, and it will take some time before I get more. But here is some information that might be helpful:
Since @alireza-msve used exactly the code you shared, I would suggest running it again to double-check. If you still get bad autocorrelation plots, please let me know.
If you have some time, could you please look at the code below?

```python
EPS = 0.55

def autocorr(X, Y):
    ...

def get_autocorr(feature):
    ...
```
This EPS is for ensuring numerical stability when calculating the autocorrelation, NOT the DP parameter. You should not change it. The epsilon in the DP results is controlled by
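For readers following along, here is a minimal sketch of what such an autocorrelation computation typically looks like. The function shape and the use of a small EPS in the denominator are assumptions for illustration; this is not the repo's exact code.

```python
import numpy as np

EPS = 1e-8  # small constant for numerical stability; NOT the DP epsilon


def autocorr(x, lag):
    """Sample autocorrelation of a 1-D series at the given lag."""
    x = np.asarray(x, dtype=float)
    xm = x - x.mean()
    denom = np.sum(xm * xm) + EPS  # EPS guards against an all-constant series
    if lag == 0:
        return 1.0
    return float(np.sum(xm[:-lag] * xm[lag:]) / denom)


# Example: a sine wave correlates positively at small lags and
# negatively at lags near half its period.
series = np.sin(np.linspace(0, 8 * np.pi, 200))
acf = [autocorr(series, k) for k in range(30)]
```

Plotting `acf` for the real data and the generated samples on the same axes gives a Figure 1-style comparison.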
Is it possible to share the code for DP-autocorrelation?
The code is completely the same. You just generate data using https://github.com/fjxmlzn/DoppelGANger/tree/master/example_dp_generating_data, and then use #20 (comment) to draw the autocorrelation. The DP parameters (including epsilon) are printed from DoppelGANger/gan/doppelganger.py, lines 925 to 933, at commit e732a4d.
Got it, thank you.
This is weird. Here is a minimal snippet for computing these epsilons.
I am getting [187266998.24801102, 1641998.2480110272, 10.515654630508177, 1.451819290643501, 0.45555693961174304], which are the numbers in the arXiv version. If you get different numbers, it is probably because of TF Privacy updates. I am using TF Privacy 0.5.1.
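The snippet itself was not captured in this thread. As a rough illustration of the kind of computation involved, here is a self-contained sketch of an RDP accountant for the plain (non-subsampled) Gaussian mechanism. TF Privacy's accountant additionally handles Poisson subsampling, so this simplified version will not reproduce the exact numbers above; the `steps`, `sigma`, and `delta` values are illustrative assumptions.

```python
import numpy as np


def gaussian_rdp(alpha, sigma):
    # Renyi DP of one (non-subsampled) Gaussian mechanism invocation
    return alpha / (2.0 * sigma ** 2)


def rdp_to_dp(orders, rdp_values, delta):
    # Convert accumulated RDP to (epsilon, delta)-DP, minimizing over orders
    eps = [r + np.log(1.0 / delta) / (a - 1.0)
           for a, r in zip(orders, rdp_values)]
    i = int(np.argmin(eps))
    return eps[i], orders[i]


orders = list(range(2, 128))
steps = 1000   # number of noisy gradient steps (illustrative)
sigma = 10.0   # noise multiplier (illustrative)
rdp = [steps * gaussian_rdp(a, sigma) for a in orders]  # RDP composes additively
eps, best_order = rdp_to_dp(orders, rdp, delta=1e-5)
```

Different TF Privacy releases have changed the default order grid and the subsampled-Gaussian bounds, which is the most likely source of version-to-version differences in the reported epsilons.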
You are probably right. I ran the above code again and got similar values as before. I am using TF Privacy 0.6.0.
Just double-checking: you mean you get the values you shared in #22 (comment), right?
Yes.
Cool. Then it should be due to TF Privacy updates.
@fjxmlzn @fxctydfty man, I hate TF. Do you guys see these errors when running
I got something closer after rerunning. Still not the same, but I'll take it. @fjxmlzn, thank you for your help! Please close this issue.
Great! This one looks close to what it should be.
After training for 12.5 hours, using the version without GPUTaskScheduler on a local machine (RTX 3060), this was my plot for the ACF. I used Python 3.7.0 and TensorFlow 1.14.0. What's going on, haha.
Hi @fjxmlzn, @CubicQubit, @rllyryan, can you specify exactly which code I should use to replicate Figure 1, and which versions of Python, tensorflow, and tensorflow_privacy you used? I keep getting results that differ from the replication, so I would really appreciate it if you could point me to the exact code and versions. Please also tell me which hyperparameters you used. It says here you are using TF Privacy 0.5.1 and TensorFlow 1.14.0; can you reconfirm that?
Hi @fjxmlzn,
Thank you for the phenomenal effort on this repo. Additionally, thank you for sharing the code for calculating the autocorrelation; it helped me match the autocorrelation curve in Figure 1.
I'm trying to reproduce the curve for DoppelGANger in Figure 1 below:
First of all, I used the WWT data provided in the GDrive for training and testing. In addition, I ran the DoppelGANger framework provided in `example_training` (without GPU_TASK). I'm struggling to match the curve in Figure 1 with two different `sample_len` values: `sample_len=5` and `sample_len=10`.
I was wondering if you can help me understand what went wrong and how I can reproduce the performance in the paper.
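As context on the `sample_len` knob: in DoppelGANger's batched generation, each RNN step emits `sample_len` records, so the step count is the series length divided by `sample_len`. A small arithmetic sketch, assuming a total length of 550 (based on the WWT dataset description in the paper; check your own data's length):

```python
# DoppelGANger emits `sample_len` records per RNN step ("batched
# generation"), so the number of RNN steps = total_len / sample_len.
# total_len = 550 is an assumption based on the paper's WWT description.
total_len = 550

steps = {s: total_len // s for s in (5, 10)}
print(steps)  # fewer steps with sample_len=10 than with sample_len=5
```

A larger `sample_len` shortens the unrolled RNN, which changes the training dynamics, so the two settings are not expected to produce identical curves.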
The hyperparameters used are: