I am currently attempting to reproduce the CIFAR-10 knowledge distillation results in your paper, which are based on inversion and adversarial losses.
Could you please provide more details on the following points?

1. For the distillation setting, should I use the same inversion hyperparameters as those listed under the CIFAR-10 directory? If not, could you kindly provide the corresponding hyperparameters?

2. As I understand it, your distillation process is iterative: you first generate a batch of synthetic samples, then train the student model on these samples, and repeat this cycle multiple times so that the adversarial loss can take effect. Could you confirm this understanding? Also, how many synthetic samples are generated in each distillation round, and how many rounds did you run in total?
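For reference, the loop I have in mind is sketched below. This is only my assumed reading of the paper, written in plain Python with toy stand-ins for the inversion step and the student update; `num_rounds` and `samples_per_round` are exactly the hypothetical hyperparameters I am asking about.

```python
# Sketch of the iterative data-free distillation loop as I understand it.
# invert_batch and train_student are placeholders; only the control flow
# (generate -> train -> repeat, so the adversarial loss sees the current
# student) is the point of this sketch.

def invert_batch(teacher, student, samples_per_round):
    # Placeholder for model inversion: in the real method, noise images
    # would be optimized against the teacher (and, via the adversarial
    # loss, against the *current* student), which is why generation must
    # be interleaved with student training rather than done once up front.
    return [f"synthetic_{i}" for i in range(samples_per_round)]

def train_student(student, batch):
    # Placeholder for a distillation step on the synthetic batch
    # (e.g., matching student logits to teacher logits).
    student["samples_seen"] += len(batch)
    return student

def distill(teacher, num_rounds, samples_per_round):
    student = {"samples_seen": 0}
    for _ in range(num_rounds):
        batch = invert_batch(teacher, student, samples_per_round)  # 1) generate
        student = train_student(student, batch)                    # 2) train
    return student

student = distill(teacher={}, num_rounds=3, samples_per_round=4)
print(student["samples_seen"])  # 12 synthetic samples over 3 rounds
```

If this interleaving is not what the paper does (e.g., if all samples are generated before any student training), please correct me.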
A respectful salute to your work.