I noticed a couple of possible problems, as follows.

On line 177 of "model.py", I think `self.gener_loss` should be divided by `float(len(scale_weight.keys()))` so that it becomes the average generator loss over all scales; note that `self.gener_acc` already gets its mean value this way. The discriminator has the same problem on line 149: `self.discr_loss` should be divided by `float(len(scale_weight.keys()) * 3)`.
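A minimal sketch of the averaging I am proposing, not the repository's actual code; the example `scale_weight` dict, the per-scale loss values, and the factor of 3 (three loss terms per discriminator scale) are assumptions based on the description above:

```python
# Sketch only: illustrates dividing the summed losses by the number of scales.
scale_weight = {1: 1.0, 2: 1.0, 4: 1.0}     # hypothetical scales -> weights
gener_losses = {1: 0.9, 2: 1.1, 4: 1.0}     # hypothetical per-scale generator losses
discr_losses = {1: 2.7, 2: 3.0, 4: 3.3}     # hypothetical per-scale discriminator losses

num_scales = float(len(scale_weight.keys()))

# Generator: divide the summed loss by the number of scales,
# matching the way gener_acc is averaged.
gener_loss = sum(gener_losses.values()) / num_scales

# Discriminator: assuming 3 loss terms per scale, divide by num_scales * 3.
discr_loss = sum(discr_losses.values()) / (num_scales * 3.0)

print(gener_loss, discr_loss)
```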
How is each random batch of `batch_size` samples produced?

My understanding is that `initialize_batch_worker(*)` in "prepare_dataset.py" continuously puts training data into a queue, and `q_art.get()` in "model.py" then retrieves one batch from it; once training on that batch finishes, the process repeats. Is this understanding correct?
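A minimal sketch of the producer/consumer pattern I have in mind, assuming a worker process fills a multiprocessing queue with batches while the training loop pulls them with `get()`; the names `batch_worker`, the array shape, and the loop structure are hypothetical, only `q_art.get()` comes from the code:

```python
import multiprocessing as mp
import numpy as np

def batch_worker(q, batch_size):
    # Producer: keep generating random batches and pushing them into the queue.
    while True:
        batch = np.random.rand(batch_size, 64, 64, 3).astype(np.float32)
        q.put(batch)  # blocks when the queue is full

if __name__ == "__main__":
    q_art = mp.Queue(maxsize=8)
    worker = mp.Process(target=batch_worker, args=(q_art, 16), daemon=True)
    worker.start()

    for step in range(100):
        batch = q_art.get()  # consumer: blocks until a batch is available
        # ... run one training step on `batch` ...
```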
I appreciate your work, and thanks in advance for your help.